Re: [openstack-dev] [storyboard] Prioritization?

2018-09-26 Thread Adam Coldrick
On Tue, 2018-09-25 at 18:40 +, CARVER, PAUL wrote:
[...]
> There is certainly room for additional means of juggling and
> discussing/negotiating priorities in the stages before work really gets
> under way, but if it doesn't eventually become clear
> 
> 1) who's doing the work
> 2) when are they targeting completion
> 3) what (if anything) is higher up on their todo list

It's entirely possible to track these three things in StoryBoard today,
and for other people to view that information.

1) Task assignee, though this should be set when someone actually starts
doing the work rather than being used to indicate "$person intends to do
this at some point"
2) Due date on a card in a board
3) Lanes in that board ordered by priority

The latter two assume that the person doing the work is using a board to
track what they're doing, which is probably sensible behaviour we should
encourage.

It's admittedly difficult for downstream consumers to quickly find the
board-related information, but I think that is a discoverability bug (it
isn't currently obvious where exactly these things can be found)
rather than a fundamental issue which means we should just abandon the
multi-dimensional approach.

> then it's impossible for anyone else to make any sort of plans that
> depend on that work. Plans could include figuring out how to add more
> resources or contingency plans. It's also possible that people or
> projects may develop a reputation for not delivering on their stated top
> priorities, but that's at least better than having no idea what the
> priorities are because every person and project is making up their own
> system for tracking it.

I would argue that someone who wants to make plans based on upstream work
that may or may not get done should be taking the time (which should at
worst be something like "reading the docs to find a link to a
worklist/board" even with the current implementation) to understand how
upstream are expressing the state of their work, though I can understand
why they might not always want to. I definitely think that defining an
"official"-ish approach is something that should probably be done, to
reduce the cognitive load on newcomers.

- Adam



Re: [openstack-dev] [kolla] ceph osd deploy fails

2018-09-26 Thread Eduardo Gonzalez
Hi, what version of Rocky are you using? It may have been in the middle of
a backport which temporarily broke Ceph.

Could you try the latest stable/rocky branch?

It is now working properly.

Regards

On Wed, Sep 26, 2018, 2:32 PM Florian Engelmann <
florian.engelm...@everyware.ch> wrote:

> Hi,
>
> I tried to deploy Rocky in a multinode setup but ceph-osd fails with:
>
>
> failed: [xxx-poc2] (item=[0, {u'fs_uuid': u'', u'bs_wal_label':
> u'', u'external_journal': False, u'bs_blk_label': u'',
> u'bs_db_partition_num': u'', u'journal_device': u'', u'journal': u'',
> u'partition': u'/dev/nvme0n1', u'bs_wal_partition_num': u'',
> u'fs_label': u'', u'journal_num': 0, u'bs_wal_device': u'',
> u'partition_num': u'1', u'bs_db_label': u'', u'bs_blk_partition_num':
> u'', u'device': u'/dev/nvme0n1', u'bs_db_device': u'',
> u'partition_label': u'KOLLA_CEPH_OSD_BOOTSTRAP_BS', u'bs_blk_device':
> u''}]) => {
>  "changed": true,
>  "item": [
>  0,
>  {
>  "bs_blk_device": "",
>  "bs_blk_label": "",
>  "bs_blk_partition_num": "",
>  "bs_db_device": "",
>  "bs_db_label": "",
>  "bs_db_partition_num": "",
>  "bs_wal_device": "",
>  "bs_wal_label": "",
>  "bs_wal_partition_num": "",
>  "device": "/dev/nvme0n1",
>  "external_journal": false,
>  "fs_label": "",
>  "fs_uuid": "",
>  "journal": "",
>  "journal_device": "",
>  "journal_num": 0,
>  "partition": "/dev/nvme0n1",
>  "partition_label": "KOLLA_CEPH_OSD_BOOTSTRAP_BS",
>  "partition_num": "1"
>  }
>  ]
> }
>
> MSG:
>
> Container exited with non-zero return code 2
>
> We tried to debug the error message by starting the container with a
> modified entrypoint but we are stuck at the following point right now:
>
>
> docker run  -e "HOSTNAME=10.0.153.11" -e "JOURNAL_DEV=" -e
> "JOURNAL_PARTITION=" -e "JOURNAL_PARTITION_NUM=0" -e
> "KOLLA_BOOTSTRAP=null" -e "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS" -e
> "KOLLA_SERVICE_NAME=bootstrap-osd-0" -e "OSD_BS_BLK_DEV=" -e
> "OSD_BS_BLK_LABEL=" -e "OSD_BS_BLK_PARTNUM=" -e "OSD_BS_DB_DEV=" -e
> "OSD_BS_DB_LABEL=" -e "OSD_BS_DB_PARTNUM=" -e "OSD_BS_DEV=/dev/nvme0n1"
> -e "OSD_BS_LABEL=KOLLA_CEPH_OSD_BOOTSTRAP_BS" -e "OSD_BS_PARTNUM=1" -e
> "OSD_BS_WAL_DEV=" -e "OSD_BS_WAL_LABEL=" -e "OSD_BS_WAL_PARTNUM=" -e
> "OSD_DEV=/dev/nvme0n1" -e "OSD_FILESYSTEM=xfs" -e "OSD_INITIAL_WEIGHT=1"
> -e "OSD_PARTITION=/dev/nvme0n1" -e "OSD_PARTITION_NUM=1" -e
> "OSD_STORETYPE=bluestore" -e "USE_EXTERNAL_JOURNAL=false"   -v
> "/etc/kolla//ceph-osd/:/var/lib/kolla/config_files/:ro" -v
> "/etc/localtime:/etc/localtime:ro" -v "/dev/:/dev/" -v
> "kolla_logs:/var/log/kolla/" -ti --privileged=true --entrypoint
> /bin/bash
>
> 10.0.128.7:5000/openstack/openstack-kolla-cfg/ubuntu-source-ceph-osd:7.0.0.3
>
>
>
> cat /var/lib/kolla/config_files/ceph.client.admin.keyring >
> /etc/ceph/ceph.client.admin.keyring
>
>
> cat /var/lib/kolla/config_files/ceph.conf > /etc/ceph/ceph.conf
>
>
> (bootstrap-osd-0)[root@985e2dee22bc /]# /usr/bin/ceph-osd -d
> --public-addr 10.0.153.11 --cluster-addr 10.0.153.11
> usage: ceph-osd -i  [flags]
>--osd-data PATH data directory
>--osd-journal PATH
>  journal file or block device
>--mkfs           create a [new] data directory
>--mkkey   generate a new secret key. This is normally used in
> combination with --mkfs
>--convert-filestore
>  run any pending upgrade operations
>--flush-journal   flush all data out of journal
>--mkjournal   initialize a new journal
>--check-wants-journal
>  check whether a journal is desired
>--check-allows-journal
>  check whether a journal is allowed
>--check-needs-journal
>  check whether a journal is required
>--debug_osd      set debug level (e.g. 10)
>--get-device-fsid PATH
>  get OSD fsid for the given block device
>
>--conf/-c FILEread configuration from the given configuration file
>--id/-i IDset ID portion of my name
>--name/-n TYPE.ID set name
>--cluster NAMEset cluster name (default: ceph)
>--setuser USERset uid to user or uid (and gid to user's gid)
>--setgroup GROUP  set gid to group or gid
>--version show version and quit
>
>-d               run in foreground, log to stderr.
>-f               run in foreground, log to usual location.
>--debug_ms N  set message debug level (e.g. 1)
> 2018-09-26 12:28:07.801066 7fbda64b4e40  0 ceph version 12.2.4
> (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable), process
> (unknown), pid 46
> 2018-09-26 12:28:07.801078 7fbda64b4e40 -1 must specify '-i #' where #
> is the osd number
>
>
> But it looks like "-i" is not set anywhere?
>
> grep command
> 
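
For anyone reproducing this by hand, a greatly simplified sketch of what a
manual bootstrap inside the container could look like (the OSD id below is
obtained from the cluster; kolla's own scripts normally handle this, and
auth/keyring and data-dir setup are omitted here):

    OSD_ID=$(ceph osd create)
    ceph-osd --mkfs -i "${OSD_ID}" --mkkey
    ceph-osd -d -i "${OSD_ID}" --public-addr 10.0.153.11 --cluster-addr 10.0.153.11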

[openstack-dev] [mistral] Extend created(updated)_at by started(finished)_at to clarify the duration of the task

2018-09-26 Thread Олег Овчарук
Hi everyone! Please take a look at the blueprint that I've just created:
https://blueprints.launchpad.net/mistral/+spec/mistral-add-started-finished-at

I'd like to implement this feature, and I also want to update CloudFlow
once it is done. Please let me know in the blueprint whether I can start
implementing.
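
A minimal sketch of the proposed change, assuming a SQLAlchemy model along
the lines of Mistral's task executions (model and column names here are
illustrative, not the final design):

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class TaskExecution(Base):
        __tablename__ = 'task_executions_v2'
        id = sa.Column(sa.String(36), primary_key=True)
        created_at = sa.Column(sa.DateTime)   # exists today
        updated_at = sa.Column(sa.DateTime)   # exists today
        started_at = sa.Column(sa.DateTime, nullable=True)   # proposed
        finished_at = sa.Column(sa.DateTime, nullable=True)  # proposed

The duration of a task would then be finished_at - started_at, independent
of queuing delays or later record updates.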


Re: [openstack-dev] [horizon] Horizon gates are broken

2018-09-26 Thread Ivan Kolodyazhny
Hi all,

Patch [1] is merged and our gates are unblocked now. I went through the
review list and posted 'recheck' where it was needed.

We need to cherry-pick this fix to the stable branches too. I'll do it ASAP.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/


On Mon, Sep 24, 2018 at 11:18 AM Ivan Kolodyazhny  wrote:

> Hi team,
>
> Unfortunately, horizon gates are broken now. We can't merge any patch due
> to the -1 from CI.
> I don't want to disable tests now, which is why I proposed a fix [1].
>
> Some of the XStatic-* packages were released last week. At least the new
> XStatic-jQuery [2] breaks horizon [3]. I'm working on a new job for the
> requirements repo [4] to prevent such issues in the future.
>
> Please do not 'recheck' until [1] is merged.
>
> [1] https://review.openstack.org/#/c/604611/
> [2] https://pypi.org/project/XStatic-jQuery/#history
> [3] https://bugs.launchpad.net/horizon/+bug/1794028
> [4] https://review.openstack.org/#/c/604613/
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
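
For illustration, the usual guard against this class of breakage is an
exact pin in the requirements repo's upper-constraints.txt; the version
below is only an example of the pin format, not the actual fix:

    XStatic-jQuery===1.12.4.1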


[openstack-dev] [all] Sphinx 'doctrees' bug recently resolved

2018-09-26 Thread Stephen Finucane
FYI, Sphinx 1.8.0 contained a minor issue (introduced by yours truly)
which resulted in an incorrect default doctree directory [1]. This
would manifest itself as a 'docs/source/.doctree' directory being
created, and would only affect projects that call 'sphinx-build' from
tox without specifying the '-d' parameter.

This has been fixed in Sphinx 1.8.1. I don't think it's a significant
enough issue to blacklist the 1.8.0 version, and if you were seeing
issues with this in recent weeks, the issue should now be resolved. You
can still merge patches which explicitly set the '-d' parameter, but
they should be unnecessary now.

Stephen

[1] https://github.com/sphinx-doc/sphinx/issues/5418
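
For anyone unsure what "specifying the '-d' parameter" looks like, a
typical tox stanza (paths assumed; adjust to your repository layout):

    [testenv:docs]
    deps = -r{toxinidir}/doc/requirements.txt
    commands =
      sphinx-build -W -d doc/build/doctrees -b html doc/source doc/build/html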




Re: [openstack-dev] [storyboard] why use different "bug" tags per project?

2018-09-26 Thread Jeremy Stanley
On 2018-09-26 00:50:16 -0600 (-0600), Chris Friesen wrote:
> At the PTG, it was suggested that each project should tag their bugs with
> "-bug" to avoid tags being "leaked" across projects, or something
> like that.
> 
> Could someone elaborate on why this was recommended?  It seems to me that
> it'd be better for all projects to just use the "bug" tag for consistency.
> 
> If you want to get all bugs in a specific project it would be pretty easy to
> search for stories with a tag of "bug" and a project of "X".

Because stories are a cross-project concept and tags are applied to
the story, it's possible for a story with tasks for both
openstack/nova and openstack/cinder projects to represent a bug for
one and a new feature for the other. If they're tagged nova-bug and
cinder-feature then that would allow them to match the queries those
teams have defined for their worklists, boards, et cetera. It's of
course possible to just hand-wave that these intersections are rare
enough to ignore and go ahead and use generic story tags, but the
recommendation is there to allow teams to avoid disagreements in
such cases.
-- 
Jeremy Stanley
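
On the query side, a sketch of pulling such stories out of the StoryBoard
REST API (the endpoint exists; the exact tags parameter here is an
assumption, so check the API reference before relying on it):

    curl "https://storyboard.openstack.org/api/v1/stories?tags=nova-bug"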




Re: [openstack-dev] [kolla] Proposing Chason Chan (chason) as kolla-ansible core

2018-09-26 Thread Surya Singh
+1

On Tue, Sep 25, 2018 at 9:17 PM Eduardo Gonzalez  wrote:

> Hi,
>
> I would like to propose Chason Chan to the kolla-ansible core team.
>
> Chason has been working on the addition of the Vitrage roles, reworking
> the VpnaaS service, maintaining documentation, as well as fixing many
> bugs.
>
> Voting will be open for 14 days (until 9th of Oct).
>
> Kolla-ansible cores, please leave a vote.
> Consider this mail my +1 vote
>
> Regards,
> Eduardo


Re: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for 'T' release

2018-09-26 Thread Colleen Murphy
On Mon, Sep 24, 2018, at 3:22 PM, Kashyap Chamarthy wrote:
> Hey folks,
> 
> Before we bump the agreed upon[1] minimum versions for libvirt and QEMU
> for 'Stein', we need to do the tedious work of picking the NEXT_MIN_*
> versions for the 'T' (which is still in the naming phase) release, which
> will come out in the autumn (Sep-Nov) of 2019.
> 
> Proposal
> 
> 
> Looking at the DistroSupportMatrix[2], it seems like we can pick the
> libvirt and QEMU versions supported by the next LTS release of Ubuntu --
> 18.04; "Bionic", which are:
> 
> libvirt: 4.0.0
> QEMU: 2.11
> 
> Debian, Fedora, Ubuntu (Bionic), openSUSE currently already ship the
> above versions.  And it seems reasonable to assume that the enterprise
> distributions will also ship the said versions pretty soon; but let's
> double-confirm below.
> 
> Considerations and open questions
> -
> 
> (a) KVM for IBM z Systems: John Garbutt pointed out[3] on IRC that:
> "IBM announced that KVM for IBM z will be withdrawn, effective March
> 31, 2018 [...] development will not only continue unaffected, but
> the options for users grow, especially with the recent addition of
> SuSE to the existing support in Ubuntu."
> 
> The message seems to be: "use a regular distribution".  So this is
> covered, if we pick a version based on other distributions.
> 
> (b) Oracle Linux: Can you please confirm if you'll be able to
> ship libvirt 4.0.0 and QEMU 2.11, respectively?
> 
> (c) SLES: Same question as above.

Already responded on IRC and on the patch, but to close the loop here: these 
should be fine for the next versions of SLES, thanks for checking.

Colleen

> 
> Assuming Oracle Linux and SLES confirm, please let us know if there are
> any objections if we pick NEXT_MIN_* versions for the OpenStack 'T'
> release to be libvirt: 4.0.0 and QEMU: 2.11.
> 
> * * *
> 
> A refresher on libvirt and QEMU release schedules
> -
> 
>   - There will be at least 12 libvirt releases (_excluding_ maintenance
> releases) by Autumn 2019.  A new libvirt release comes out every
> month[4].
> 
>   - And there will be about 4 releases of QEMU.  A new QEMU release
> comes out once every four months.
> 
> [1] http://git.openstack.org/cgit/openstack/nova/commit/?h=master&id=28d337b
> -- Pick next minimum libvirt / QEMU versions for "Stein"
> [2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix
> [3] http://kvmonz.blogspot.com/2017/03/kvm-for-ibm-z-withdrawal.html
> [4] https://libvirt.org/downloads.html#schedule
> 
> -- 
> /kashyap
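
For context, nova expresses these bumps as version tuples in
nova/virt/libvirt/driver.py; a sketch of what this proposal amounts to,
using the existing constant names there (values are the proposal, not yet
merged):

    NEXT_MIN_LIBVIRT_VERSION = (4, 0, 0)
    NEXT_MIN_QEMU_VERSION = (2, 11, 0)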


Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Jay Pipes

On 09/26/2018 05:10 AM, Colleen Murphy wrote:
> Thanks for the summary, Ildiko. I have some questions inline.
>
> On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:
>
>
>> We agreed to prefer federation for Keystone and came up with two work
>> items to cover missing functionality:
>>
>> * Keystone to trust a token from an ID Provider master and when the auth
>> method is called, perform an idempotent creation of the user, project
>> and role assignments according to the assertions made in the token
>
> This sounds like it is based on the customizations done at Oath, which to
> my recollection did not use the actual federation implementation in
> keystone due to its reliance on Athenz (I think?) as an identity manager.
> Something similar can be accomplished in standard keystone with the
> mapping API in keystone which can cause dynamic generation of a shadow
> user, project and role assignments.
>
>> * Keystone should support the creation of users and projects with
>> predictable UUIDs (eg.: hash of the name of the users and projects).
>> This greatly simplifies Image federation and telemetry gathering
>
> I was in and out of the room and don't recall this discussion exactly. We
> have historically pushed back hard against allowing setting a project ID
> via the API, though I can see predictable-but-not-settable as less
> problematic. One of the use cases from the past was being able to use the
> same token in different regions, which is problematic from a security
> perspective. Is that the idea here? Or could someone provide more details
> on why this is needed?


Hi Colleen,

I wasn't in the room for this conversation either, but I believe the 
"use case" wanted here is mostly a convenience one. If the edge 
deployment is composed of hundreds of small Keystone installations and 
you have a user (e.g. an NFV MANO user) which should have visibility 
across all of those Keystone installations, it becomes a hassle to need 
to remember (or in the case of headless users, store some lookup of) all 
the different tenant and user UUIDs for what is essentially the same 
user across all of those Keystone installations.


I'd argue that as long as it's possible to create a Keystone tenant and 
user with a unique name within a deployment, and as long as it's 
possible to authenticate using the tenant and user *name* (i.e. not the 
UUID), then this isn't too big of a problem. However, I do know that a 
bunch of scripts and external tools rely on setting the tenant and/or 
user via the UUID values and not the names, so that might be where this 
feature request is coming from.


Hope that makes sense?

Best,
-jay



[openstack-dev] [ptl][release] Proposed changes for cycle-with-milestones deliverables

2018-09-26 Thread Sean McGinnis
During the Stein PTG in Denver, the release management team talked about ways
we can make things simpler and reduce the "paper pushing" work that all teams
need to do right now. One topic that came up was the usefulness of pushing tags
around milestones during the cycle.

There were a couple of needs identified for doing such "milestone releases":
1) It tests the release automation machinery to identify problems before
   the RC and final release crunch time.
2) It creates a nice cadence throughout the cycle to help teams stay on
   track and focus on the right things for each phase of the cycle.
3) It gives us an indication that teams are healthy, active, and planning
   to include their components in the final release.

One of the big motivators in the past was also to have output that downstream
distros and users could pick up for testing and early packaging. Based on our
admittedly anecdotal small sample, it doesn't appear this is actually a big
need, so we propose to stop tagging milestone releases for the
cycle-with-milestone projects.

We would still have "milestones" during the cycle to facilitate work
organization and create a cadence: teams should still be aware of them, and we
will continue to communicate those dates in the schedule and in the release
countdown emails. But you would no longer be required to request a release for 
each milestone.

Beta releases would be optional: if teams do want to have some beta version
tags before the final release they can still request them - whether on one of
the milestone dates, or whenever there is the need for the project.

Release candidates would still require a tag. To facilitate that step and
guarantee we have a release candidate for every deliverable, the release team
proposes to automatically generate a release request early in the week of the
RC deadline. That patch would be used as a base to communicate with the team:
if a team wants to wait for a specific patch to make it to the RC, someone from
the team can -1 the patch to have it held, or update that patch with a
different commit SHA. If there are no issues, ideally we would want a +1 from
the PTL and/or release liaison to indicate approval, but we would also consider
no negative feedback as an indicator that the automatically proposed patches
without a -1 can all be approved at the end of the RC deadline week.

To cover point (3) above, and clearly know that a project is healthy and should
be included in the coordinated release, we are thinking of requiring a person 
for each team to add their name to a "manifest" of sorts for the release cycle.
That "final release liaison" person would be the designated person to follow
through on finishing out the releases for that team, and would be designated
ahead of the final release phases.

With all these changes, we would rename the cycle-with-milestones release
model to something like cycle-with-rc.
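
For reference, such a release request is just a patch to a deliverable
file in the openstack/releases repository; a sketch of what the
automatically proposed patch might contain (deliverable name, version and
hash are illustrative):

    # deliverables/stein/example.yaml
    launchpad: example
    release-model: cycle-with-rc
    team: example
    releases:
      - version: 1.0.0.0rc1
        projects:
          - repo: openstack/example
            hash: 0000000000000000000000000000000000000000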

FAQ:
Q: Does this mean I don't need to pay attention to releases any more and the
   release team will just take care of everything?
A: No. We still want teams engaged in the release cycle and would feel much
   more comfortable if we get an explicit +1 from the team on any proposed tags
   or releases.

Q: Who should sign up to be the final release liaison?
A: Anyone in the team, really. It could be the PTL, the standing release
   liaison, or someone else stepping up to cover that role.

--
Thanks!
The Release Team



[openstack-dev] [tripleo] OVB 1.0 and Upcoming Changes

2018-09-26 Thread Ben Nemec
(this is a reprint of a blog post I just made. I'm sending it here 
explicitly too because most (maybe all) of the major users are here. See 
also http://blog.nemebean.com/content/ovb-10-and-upcoming-changes)


The time has come to declare a 1.0 version for OVB. There are a couple 
of reasons for this:


1. OVB has been stable for quite a while
2. It's time to start dropping support for ancient behaviors/clouds

The first is somewhat self-explanatory. Since its inception, I have 
attempted to maintain backward compatibility to the earliest deployments 
of OVB. This hasn't always been 100% successful, but when 
incompatibilities were introduced they were considered bugs that had to 
be fixed. At this point the OVB interface has been stable for a 
significant period of time and it's time to lock that in.


However, on that note it is also time to start dropping support for some 
of those earliest environments. The limitations of the original 
architecture make it more and more difficult to implement new features 
and there are very few to no users still relying on it. Declaring a 1.0 
and creating a stable branch for it should allow us to move forward with 
new features while still providing a fallback for anyone who might still 
be using OVB on a Kilo-based cloud (for example). I'm not aware of any 
such users, but that doesn't mean they don't exist.


Specifically, the following changes are expected for OVB 2.0:

* Minimum host cloud version of Newton. This allows us to default to 
using Neutron port-security, which will simplify the configuration 
matrix immensely.
* Drop support for parameters in environment files. All OVB 
configuration environment files should be using parameter_defaults now 
anyway, and some upcoming features require us to force the switch. This 
shouldn't be too painful as it mostly requires 
s/parameters:/parameter_defaults:/ in any existing environments (see the 
sketch after this list).
* Part of the previous point is a change to how ports and networks are 
created. This means that if users have created custom port or network 
layouts they will need to update their templates to reflect the new way 
of passing in network details. I don't know that anyone has done this, 
so I expect the impact to be small.
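
To make that rename concrete, a sketch of an OVB environment file before
and after (the parameter shown is illustrative):

    # Old style, dropped in OVB 2.0:
    parameters:
      baremetal_flavor: baremetal

    # New style:
    parameter_defaults:
      baremetal_flavor: baremetal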


The primary motivation for these changes is the work to support routed 
networks in OVB[1]. It requires customization of some networks that were 
hard-coded in the initial version of OVB, which means that making them 
configurable without breaking compatibility would be 
difficult/impossible. Since the necessary changes should only break very 
old style deployments, I feel it is time to make a clean cut and move on 
from them. As I noted earlier, I don't believe this will actually affect 
many OVB users, if any.


If these changes do sound like they may break you, please contact me 
ASAP. It would be a good idea to test your use-case against the 
routed-networks branch[1] to make sure it still works. If so, great! 
There's nothing to do. That branch already includes most of the breaking 
changes. If not, we can investigate how to maintain compatibility, or if 
that's not possible you may need to continue using the 1.0 branch of OVB 
which will exist indefinitely for users who still absolutely need the 
old behaviors and can't move forward for any reason. There is currently 
no specific timeline for when these changes will merge back to master, 
but I hope to get it done in the relatively near future. Don't 
procrastinate. :-)


Some of these changes have been coming for a while - the lack of 
port-security in the default templates is starting to cause more grief 
than maintaining backward compatibility saves. The routed networks 
architecture is a realization of the original goal for OVB, which is to 
deploy arbitrarily complex environments for testing deployment tools. If 
you want some geek porn, check out this network diagram[2] for routed 
networks. It's pretty cool to be able to deploy such a complex 
environment with a couple of configuration files and a single command. 
Once it is possible to customize all the networks it should be possible 
to deploy just about any environment imaginable (challenge accepted... 
;-). This is a significant milestone for OVB and I look forward to 
seeing it in action.


-Ben

1: 
https://github.com/cybertron/openstack-virtual-baremetal/tree/routed-networks

2: https://plus.google.com/u/0/+BenNemec/posts/5nGJ3Rzt2iL



Re: [openstack-dev] [taas] rocky

2018-09-26 Thread Miguel Lavalle
Thanks Takashi

On Wed, Sep 26, 2018 at 4:57 AM Takashi Yamamoto 
wrote:

> hi,
>
> it seems we forgot to create the rocky branch.
> I'll make a release and the branch sooner or later, unless someone
> beats me to it.
>


[openstack-dev] [nova] review runways for Stein are open

2018-09-26 Thread melanie witt
Just wanted to remind everyone that review runways for Stein are OPEN. 
Please feel free to add your approved, ready-for-review blueprints to 
the queue:


https://etherpad.openstack.org/p/nova-runways-stein

Cheers,
-melanie



Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Morgan Fainberg
This discussion was also not about user-assigned IDs, but predictable IDs
with the auto provisioning. We still want it to be something keystone
controls (locally). It might be a hash of the domain ID and a value from
the assertion (similar to the LDAP user ID generator). As long as, within
an environment, the IDs are predictable when auto provisioning via
federation, we should be good. And the problem of the totally unknown ID
until provisioning could be made less of an issue for someone working
within a massively federated edge environment.

I don't want user- or explicit admin-set IDs.
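
A minimal sketch of the kind of generator being described, assuming a
uuid5-style hash over the domain ID and the asserted name (illustrative
only, not an agreed algorithm):

    import uuid

    def predictable_user_id(domain_id: str, asserted_name: str) -> str:
        # Deterministic but not settable: the same inputs always
        # produce the same ID, in every edge deployment.
        seed = '%s:%s' % (domain_id, asserted_name)
        return uuid.uuid5(uuid.NAMESPACE_DNS, seed).hex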

On Wed, Sep 26, 2018, 04:43 Jay Pipes  wrote:

> On 09/26/2018 05:10 AM, Colleen Murphy wrote:
> > Thanks for the summary, Ildiko. I have some questions inline.
> >
> > On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:
> >
> > 
> >
> >>
> >> We agreed to prefer federation for Keystone and came up with two work
> >> items to cover missing functionality:
> >>
> >> * Keystone to trust a token from an ID Provider master and when the auth
> >> method is called, perform an idempotent creation of the user, project
> >> and role assignments according to the assertions made in the token
> >
> > This sounds like it is based on the customizations done at Oath, which
> to my recollection did not use the actual federation implementation in
> keystone due to its reliance on Athenz (I think?) as an identity manager.
> Something similar can be accomplished in standard keystone with the mapping
> API in keystone which can cause dynamic generation of a shadow user,
> project and role assignments.
> >
> >> * Keystone should support the creation of users and projects with
> >> predictable UUIDs (eg.: hash of the name of the users and projects).
> >> This greatly simplifies Image federation and telemetry gathering
> >
> > I was in and out of the room and don't recall this discussion exactly.
> We have historically pushed back hard against allowing setting a project ID
> via the API, though I can see predictable-but-not-settable as less
> problematic. One of the use cases from the past was being able to use the
> same token in different regions, which is problematic from a security
> > perspective. Is that the idea here? Or could someone provide more details
> on why this is needed?
>
> Hi Colleen,
>
> I wasn't in the room for this conversation either, but I believe the
> "use case" wanted here is mostly a convenience one. If the edge
> deployment is composed of hundreds of small Keystone installations and
> you have a user (e.g. an NFV MANO user) which should have visibility
> across all of those Keystone installations, it becomes a hassle to need
> to remember (or in the case of headless users, store some lookup of) all
> the different tenant and user UUIDs for what is essentially the same
> user across all of those Keystone installations.
>
> I'd argue that as long as it's possible to create a Keystone tenant and
> user with a unique name within a deployment, and as long as it's
> possible to authenticate using the tenant and user *name* (i.e. not the
> UUID), then this isn't too big of a problem. However, I do know that a
> bunch of scripts and external tools rely on setting the tenant and/or
> user via the UUID values and not the names, so that might be where this
> feature request is coming from.
>
> Hope that makes sense?
>
> Best,
> -jay
>


[openstack-dev] Ryu integration with Openstack

2018-09-26 Thread Niket Agrawal
Hello,

I have a question regarding the Ryu integration in OpenStack. By default,
the Open vSwitch bridges (br-int, br-tun and br-ex) are registered to a
controller running on 127.0.0.1 and port 6633. The output of ovs-vsctl
get-manager is ptcp:127.0.0.1:6640. This is observed on the nova compute
node. However, there is a different instance of the same Ryu controller
running on the neutron gateway as well, and the three Open vSwitch bridges
(br-int, br-tun and br-ex) there are registered to that instance of the
Ryu controller. If I stop the neutron-openvswitch agent on the nova
compute node, the bridges there are no longer connected to the controller,
but the bridges on the neutron gateway remain connected. Only when I stop
the neutron-openvswitch agent on the neutron gateway as well do the
bridges there get disconnected.

I'm unable to find where in the OpenStack code I can access this
implementation, because I intend to make a few tweaks to the architecture
which is present currently. Also, I'd like to know which app the Ryu SDN
controller is running by default at the moment. I feel the information in
the code can help me find it too.

Regards,
Niket
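
For what it's worth, the 127.0.0.1:6633 controller is typically the
'native' OpenFlow (Ryu) app embedded in neutron-openvswitch-agent itself,
under neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/,
driven by roughly this agent configuration (defaults shown; verify against
your deployed release):

    [ovs]
    of_interface = native
    of_listen_address = 127.0.0.1
    of_listen_port = 6633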


Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Giulio Fidente
hi,

thanks for sharing this!

In TripleO we're looking at implementing, in Stein, the deployment of at
least one regional DC and N edge zones. More comments below.

On 9/25/18 11:21 AM, Ildiko Vancsa wrote:
> Hi,
>
> Hereby I would like to give you a short summary on the discussions
> that happened at the PTG in the area of edge.
>
> The Edge Computing Group sessions took place on Tuesday where our main
> activity was to draw an overall architecture diagram to capture the
> basic setup and requirements of edge towards a set of OpenStack
> services. Our main and initial focus was around Keystone and Glance, but
> discussion with other project teams such as Nova, Ironic and Cinder also
> happened later during the week.
>
> The edge architecture diagrams we drew are part of a so called Minimum
> Viable Product (MVP) which refers to the minimalist nature of the setup
> where we didn’t try to cover all aspects but rather define a minimum set
> of services and requirements to get to a functional system. This
> architecture will evolve further as we collect more use cases and
> requirements.
>
> To describe edge use cases on a higher level with Mobile Edge as a use
> case in the background we identified three main building blocks:
>
> * Main or Regional Datacenter (DC)
> * Edge Sites
> * Far Edge Sites or Cloudlets
>
> We examined the architecture diagram with the following user stories
> in mind:
>
> * As a deployer of OpenStack I want to minimize the number of control
> planes I need to manage across a large geographical region.
> * As a user of OpenStack I expect instance autoscale continues to
> function in an edge site if connectivity is lost to the main datacenter.
> * As a deployer of OpenStack I want disk images to be pulled to a
> cluster on demand, without needing to sync every disk image everywhere.
> * As a user of OpenStack I want to manage all of my instances in a
> region (from regional DC to far edge cloudlets) via a single API endpoint.
>
> We concluded to talk about service requirements in two major categories:
>
> 1. The Edge sites are fully operational in case of a connection loss
> between the Regional DC and the Edge site, which requires control plane
> services running on the Edge site
> 2. Having full control on the Edge site is not critical in case of a
> connection loss between the Regional DC and an Edge site, which can be
> satisfied by having the control plane services running only in the
> Regional DC
>
> In the first case the orchestration of the services becomes harder and
> is not necessarily solved yet, while in the second case you have
> centralized control but lose functionality on the Edge sites in the
> event of a connection loss.
>
> We did not discuss things such as HA at the PTG and we did not go into
> details on networking during the architectural discussion either.

While TripleO used to rely on pacemaker to manage cinder-volume A/P in
the control plane, we'd like to push for cinder-volume A/A in the edge
zones and avoid the deployment of pacemaker there.

The safety of cinder-volume A/A seems to depend mostly on the backend
driver, and for RBD we should be good.

> We agreed to prefer federation for Keystone and came up with two work
> items to cover missing functionality:
>
> * Keystone to trust a token from an ID Provider master and when the
> auth method is called, perform an idempotent creation of the user,
> project and role assignments according to the assertions made in the token
> * Keystone should support the creation of users and projects with
> predictable UUIDs (eg.: hash of the name of the users and projects).
> This greatly simplifies Image federation and telemetry gathering
>
> For Glance we explored image caching and spent some time discussing
> the option to also cache metadata so a user can boot new instances at
> the edge in case of a network connection loss which would result in
> being disconnected from the registry:
>
> * I as a user of Glance, want to upload an image in the main
> datacenter and boot that image in an edge datacenter. Fetch the image to
> the edge datacenter with its metadata
>
> We are still in the progress of documenting the discussions and drawing
> the architecture diagrams and flows for Keystone and Glance.

For glance we'd like to deploy only one glance-api in the regional DC
and configure the glance cache in each edge zone, pointing all instances
to a shared database.

This should solve the metadata problem and also provide for storage
"locality" in every edge zone.

> In addition to the above we went through the Dublin PTG wiki
> (https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG)
> capturing requirements:
>
> * we agreed to consider the list of requirements on the wiki finalized
> for now
> * agreed to move there the additional requirements listed on the Use
> Cases (https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases)
> wiki page
>
> For the details on the discussions with related OpenStack projects you
> can check the following etherpads for notes:
>
> * Cinder:

[openstack-dev] [goal][python3] week 7 update

2018-09-26 Thread Doug Hellmann

This is week 7 of the "Run under Python 3 by default" goal
(https://governance.openstack.org/tc/goals/stein/python3-first.html).

== Things We Learned This Week ==

When we updated the tox.ini settings for jobs like pep8 and release
notes early in the Rocky cycle we only touched some of the official
repositories. I'll be working on making a list of the ones we missed so
we can update them by the end of Stein.

== Ongoing and Completed Work ==

Teams are making great progress, but it looks like we have some
lingering changes in branches where the test jobs are failing.

+------------------+---------+--------------+-------+----------+---------+------------+-------+--------------------+
| Team             | zuul    | tox defaults | Docs  | 3.6 unit | Failing | Unreviewed | Total | Champion           |
+------------------+---------+--------------+-------+----------+---------+------------+-------+--------------------+
| adjutant         | +       | -            | -     | +        |       0 |          0 |     5 | Doug Hellmann      |
| barbican         | 11/13   | +            | 1/3   | +        |       6 |          4 |    20 | Doug Hellmann      |
| blazar           | +       | +            | +     | +        |       0 |          0 |    25 | Nguyen Hai         |
| Chef OpenStack   | +       | -            | -     | -        |       0 |          0 |     1 | Doug Hellmann      |
| cinder           | +       | +            | +     | +        |       0 |          0 |    31 | Doug Hellmann      |
| cloudkitty       | +       | +            | +     | +        |       0 |          0 |    24 | Doug Hellmann      |
| congress         | +       | +            | +     | +        |       0 |          0 |    24 | Nguyen Hai         |
| cyborg           | +       | +            | +     | +        |       0 |          0 |    16 | Nguyen Hai         |
| designate        | +       | +            | +     | +        |       0 |          0 |    24 | Nguyen Hai         |
| Documentation    | +       | +            | +     | +        |       0 |          0 |    22 | Doug Hellmann      |
| dragonflow       | +       | -            | +     | +        |       0 |          0 |     6 | Nguyen Hai         |
| ec2-api          | +       | -            | +     | +        |       0 |          0 |    12 |                    |
| freezer          | 3/23    | +            | +     | 2/4      |       2 |          0 |    33 |                    |
| glance           | +       | 1/4          | +     | +        |       0 |          0 |    26 | Nguyen Hai         |
| heat             | 3/27    | 1/5          | 1/6   | 1/7      |       3 |          2 |    45 | Doug Hellmann      |
| horizon          | +       | +            | +     | +        |       0 |          0 |    11 | Nguyen Hai         |
| I18n             | +       | -            | -     | -        |       0 |          0 |     2 | Doug Hellmann      |
| InteropWG        | +       | -            | +     | 1/3      |       0 |          0 |    10 | Doug Hellmann      |
| ironic           | 12/60   | +            | 2/13  | 1/12     |       0 |          0 |    90 | Doug Hellmann      |
| karbor           | +       | +            | +     | +        |       0 |          0 |    22 | Nguyen Hai         |
| keystone         | +       | +            | +     | +        |       0 |          0 |    47 | Doug Hellmann      |
| kolla            | +       | -            | +     | +        |       0 |          0 |    12 |                    |
| kuryr            | +       | +            | +     | +        |       0 |          0 |    19 | Doug Hellmann      |
| magnum           | +       | +            | +     | +        |       0 |          0 |    24 |                    |
| manila           | 3/19    | +            | +     | +        |       3 |          3 |    28 | Goutham Pacha Ravi |
| masakari         | +       | +            | +     | -        |       0 |          0 |    21 | Nguyen Hai         |
| mistral          | +       | +            | +     | +        |       0 |          0 |    37 | Nguyen Hai         |
| monasca          | 1/66    | 1/7          | +     | +        |       2 |          1 |    90 | Doug Hellmann      |
| murano           | +       | +            | +     | +        |       0 |          0 |    37 |                    |
| neutron          | 21/73   | +            | 2/14  | 2/13     |      11 |         12 |   106 | Doug Hellmann      |
| nova             | +       | +            | +     | +        |       0 |          0 |    37 |                    |
| octavia          | +       | +            | +     | +        |       0 |          0 |    34 | Nguyen Hai         |
| OpenStack Charms | 17/117  | -            | -     | -        |      14 |         17 |   117 | Doug Hellmann      |

[openstack-dev] [Heat] Bug in documentation?

2018-09-26 Thread Postlbauer, Juan
Hi everyone:

 

I see that the heat doc
https://docs.openstack.org/heat/rocky/template_guide/openstack.html#OS::Nova::Flavor
states that:

  ram
    Memory in MB for the flavor.

  disk
    Size of local disk in GB.

That would be 1000*1000 for ram and 1000*1000*1000 for disk.

But the Nova doc https://developer.openstack.org/api-ref/compute/#create-flavor
states that:

  ram (body, integer)
    The amount of RAM a flavor has, in MiB.

  disk (body, integer)
    The size of the root disk that will be created in GiB.

That would be 1024*1024 for ram and 1024*1024*1024 for disk. Which, at
least for ram, makes much more sense to me.

Is this a typo in the Heat documentation?

 

Best Regards,

   Juan Postlbauer
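
For completeness, a minimal template using the resource in question; per
the Nova API reference the units are MiB and GiB:

    heat_template_version: rocky

    resources:
      tiny_flavor:
        type: OS::Nova::Flavor
        properties:
          ram: 512    # MiB per the Nova API doc
          vcpus: 1
          disk: 1     # GiB per the Nova API doc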

 





Re: [openstack-dev] [storyboard] why use different "bug" tags per project?

2018-09-26 Thread Tom Barron

On 26/09/18 09:45 -0500, Ben Nemec wrote:



> On 9/26/18 8:20 AM, Jeremy Stanley wrote:
>> On 2018-09-26 00:50:16 -0600 (-0600), Chris Friesen wrote:
>>> At the PTG, it was suggested that each project should tag their bugs with
>>> "<project>-bug" to avoid tags being "leaked" across projects, or something
>>> like that.
>>>
>>> Could someone elaborate on why this was recommended?  It seems to me that
>>> it'd be better for all projects to just use the "bug" tag for consistency.
>>>
>>> If you want to get all bugs in a specific project it would be pretty easy
>>> to search for stories with a tag of "bug" and a project of "X".
>>
>> Because stories are a cross-project concept and tags are applied to
>> the story, it's possible for a story with tasks for both
>> openstack/nova and openstack/cinder projects to represent a bug for
>> one and a new feature for the other. If they're tagged nova-bug and
>> cinder-feature then that would allow them to match the queries those
>> teams have defined for their worklists, boards, et cetera. It's of
>> course possible to just hand-wave that these intersections are rare
>> enough to ignore and go ahead and use generic story tags, but the
>> recommendation is there to allow teams to avoid disagreements in
>> such cases.
>
> Would it be possible to automate that tagging on import? Essentially
> tag every lp bug that is not wishlist with $PROJECT-bug and wishlists
> with $PROJECT-feature. Otherwise someone has to go through and
> re-categorize everything in Storyboard.
>
> I don't know if everyone would want that, but if this is the
> recommended practice I would want it for Oslo.


I would think this is a common want, at least for the projects in the
central box in the project map [1].


-- Tom Barron (tbarron)

[1] https://www.openstack.org/openstack-map





[openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Doug Hellmann
It's time to start thinking about community-wide goals for the T series.

We use community-wide goals to achieve visible common changes, push for
basic levels of consistency and user experience, and efficiently improve
certain areas where technical debt payments have become too high -
across all OpenStack projects. Community input is important to ensure
that the TC makes good decisions about the goals. We need to consider
the timing, cycle length, priority, and feasibility of the suggested
goals.

If you are interested in proposing a goal, please make sure that before
the summit it is described in the tracking etherpad [1] and that you
have started a mailing list thread on the openstack-dev list about the
proposal so that everyone in the forum session [2] has an opportunity to
consider the details.  The forum session is only one step in the
selection process. See [3] for more details.

Doug

[1] https://etherpad.openstack.org/p/community-goals
[2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
[3] https://governance.openstack.org/tc/goals/index.html



Re: [openstack-dev] [docs] Nominating Ian Y. Choi for openstack-doc-core

2018-09-26 Thread Petr Kovar
On Sat, 22 Sep 2018 23:32:06 +0900
"Ian Y. Choi"  wrote:

> Thanks a lot all for such nomination & agreement!
> 
> I would like to do my best after I become doc-core, just as I
> currently do,
> although I still need the help from so many kind, energetic, and 
> enthusiastic OpenStack contributors and core members
> on OpenStack documentation and so many projects.

Thank you, Ian. Just updated the perms, congrats on your new role!

Best,
pk
 

> Melvin Hillsman wrote on 9/21/2018 5:31 AM:
> > ++
> >
> > On Thu, Sep 20, 2018 at 3:11 PM Frank Kloeker  > > wrote:
> >
> > On 2018-09-19 20:54, Andreas Jaeger wrote:
> > > On 2018-09-19 20:50, Petr Kovar wrote:
> > >> Hi all,
> > >>
> > >> Based on our PTG discussion, I'd like to nominate Ian Y. Choi for
> > >> membership in the openstack-doc-core team. I think Ian doesn't
> > need an
> > >> introduction, he's been around for a while, recently being deeply
> > >> involved
> > >> in infra work to get us robust support for project team docs
> > >> translation and
> > >> PDF builds.
> > >>
> > >> Having Ian on the core team will also strengthen our
> > integration with
> > >> the i18n community.
> > >>
> > >> Please let the ML know should you have any objections.
> > >
> > > The opposite ;), I heartily agree with adding him,
> > >
> > > Andreas
> >
> > ++
> >
> > Frank
> >
> >
> > 
> >
> >
> >
> > -- 
> > Kind regards,
> >
> > Melvin Hillsman
> > mrhills...@gmail.com 
> > mobile: (832) 264-2646
> >
> >
> 
> 
> 


-- 
Petr Kovar
Documentation Program Manager | Red Hat Virtualization
Customer Content Services | Red Hat Czech s.r.o.



Re: [openstack-dev] [ptl][release] Proposed changes for cycle-with-milestones deliverables

2018-09-26 Thread Jeremy Stanley
On 2018-09-26 09:22:30 -0500 (-0500), Sean McGinnis wrote:
[...]
> It tests the release automation machinery to identify problems
> before the RC and final release crunch time.
[...]

More to the point, it helped spot changes to projects which made it
impossible to generate and publish their release artifacts. Coverage
has improved for finding these issues before merging now, as well as
in flight tests on proposed releases, making the risk lower than it
used to be.
-- 
Jeremy Stanley




Re: [openstack-dev] [storyboard] why use different "bug" tags per project?

2018-09-26 Thread Ben Nemec



On 9/26/18 8:20 AM, Jeremy Stanley wrote:
> On 2018-09-26 00:50:16 -0600 (-0600), Chris Friesen wrote:
>> At the PTG, it was suggested that each project should tag their bugs with
>> "<project>-bug" to avoid tags being "leaked" across projects, or something
>> like that.
>>
>> Could someone elaborate on why this was recommended?  It seems to me that
>> it'd be better for all projects to just use the "bug" tag for consistency.
>>
>> If you want to get all bugs in a specific project it would be pretty easy
>> to search for stories with a tag of "bug" and a project of "X".
>
> Because stories are a cross-project concept and tags are applied to
> the story, it's possible for a story with tasks for both
> openstack/nova and openstack/cinder projects to represent a bug for
> one and a new feature for the other. If they're tagged nova-bug and
> cinder-feature then that would allow them to match the queries those
> teams have defined for their worklists, boards, et cetera. It's of
> course possible to just hand-wave that these intersections are rare
> enough to ignore and go ahead and use generic story tags, but the
> recommendation is there to allow teams to avoid disagreements in
> such cases.


Would it be possible to automate that tagging on import? Essentially tag 
every lp bug that is not wishlist with $PROJECT-bug and wishlists with 
$PROJECT-feature. Otherwise someone has to go through and re-categorize 
everything in Storyboard.


I don't know if everyone would want that, but if this is the recommended 
practice I would want it for Oslo.




Re: [openstack-dev] [storyboard] Prioritization?

2018-09-26 Thread Ben Nemec



On 9/25/18 3:29 AM, Thierry Carrez wrote:
> Doug Hellmann wrote:
>> I think we need to reconsider that position if it's going to block
>> adoption. I think Ben's case is an excellent second example of where
>> having a field to hold some sort of priority value would be useful.
>
> Absence of priorities was an initial design choice[1] based on the fact
> that in an open collaboration every group, team, or organization has its
> own views on what the priority of a story is, so worklists and tags are
> better ways to capture that. Also, they don't really work unless you
> triage everything. And then nobody really looks at them to prioritize
> their work, so they are high cost for little benefit.


So was the storyboard implementation based on the rant section then? 
Because I don't know that I agree with/understand some of the assertions 
there.


First, don't we _need_ to triage everything? At least on some minimal 
level? Not looking at new bugs at all seems like the way you end up with 
a security bug open for two years *ahem*. Not that I would know anything 
about that (it's been fixed now, FTR).


I'm also not sure I agree with the statement that setting a priority for 
a blueprint is useless. Prioritizing feature work is something everyone 
needs to do these days since no team has enough people to implement 
every proposed feature. Maybe the proposal is for everyone to adopt 
Nova-style runways, but I'm not sure how well that works for smaller 
projects where many of the developers are only able to devote part of 
their time to it. Setting a time window for a feature to merge or get 
kicked to the back of line would be problematic for me.


That section also ends with an unanswered question regarding how to do 
bug triage in this model, which I guess is the thing we're trying to 
address with this discussion.




> That said, it definitely creates friction, because alternatives are less
> convenient / visible, and it's not how other tools work... so the
> "right" answer here may not be the "best" answer.
>
> [1] https://wiki.openstack.org/wiki/StoryBoard/Priority



Also, like it or not there is technical debt we're carrying over here. 
All of our bug triage up to this point has been based on launchpad 
priorities, and as I think I noted elsewhere it would be a big step 
backward to completely throw that out. Whatever model for prioritization 
and triage that we choose, I feel like there needs to be a reasonable 
migration path for the thousands of existing triaged lp bugs in OpenStack.




Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread James Penick
Hey Colleen,

>This sounds like it is based on the customizations done at Oath, which to
my recollection did not use the actual federation implementation in
keystone due to its reliance on Athenz (I think?) as an identity manager.
Something similar can be accomplished in standard keystone with the mapping
API in keystone which can cause dynamic generation of a shadow user,
project and role assignments.

You're correct, this was more about the general design of asymmetrical
token based authentication rather that our exact implementation with
Athenz. We didn't use the shadow users because Athenz authentication in our
implementation is done via an 'ntoken'  which is Athenz' older method for
identification, so it was it more straightforward for us to resurrect the
PKI driver. The new way is via mTLS, where the user can identify themselves
via a client cert. I imagine we'll need to move our implementation to use
shadow users as a part of that change.

>We have historically pushed back hard against allowing setting a project
ID via the API, though I can see predictable-but-not-settable as less
problematic.

Yup, predictable-but-not-settable is what we need. Basically as long as the
uuid is a hash of the string, we're good. I definitely don't want to be
able to set a user ID or project ID via API, because of the security and
operability problems that could arise. In my mind this would just be a
config setting.

>One of the use cases from the past was being able to use the same token in
different regions, which is problematic from a security perspective. Is
that that idea here? Or could someone provide more details on why this is
needed?

Well, sorta. As far as we're concerned you can authenticate to keystone
in each region independently using your credential from the IdP. Our use
cases are more about simplifying federation of other systems, like Glance.
Say I create an image and a member list for that image. I'd like to be able
to copy that image *and* all of its metadata straight across to another
cluster and have things Just Work without needing to look up and resolve
the new UUIDs on the new cluster.

However, for deployers who wish to use Keystone as their IdP, in that
case they'll need to use that keystone credential to establish a
credential in the keystone cluster in that region.

-James

On Wed, Sep 26, 2018 at 2:10 AM Colleen Murphy  wrote:

> Thanks for the summary, Ildiko. I have some questions inline.
>
> On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:
>
> 
>
> >
> > We agreed to prefer federation for Keystone and came up with two work
> > items to cover missing functionality:
> >
> > * Keystone to trust a token from an ID Provider master and when the auth
> > method is called, perform an idempotent creation of the user, project
> > and role assignments according to the assertions made in the token
>
> This sounds like it is based on the customizations done at Oath, which to
> my recollection did not use the actual federation implementation in
> keystone due to its reliance on Athenz (I think?) as an identity manager.
> Something similar can be accomplished in standard keystone with the mapping
> API in keystone which can cause dynamic generation of a shadow user,
> project and role assignments.
>
> > * Keystone should support the creation of users and projects with
> > predictable UUIDs (eg.: hash of the name of the users and projects).
> > This greatly simplifies Image federation and telemetry gathering
>
> I was in and out of the room and don't recall this discussion exactly. We
> have historically pushed back hard against allowing setting a project ID
> via the API, though I can see predictable-but-not-settable as less
> problematic. One of the use cases from the past was being able to use the
> same token in different regions, which is problematic from a security
> perspective. Is that the idea here? Or could someone provide more details
> on why this is needed?
>
> Were there any volunteers to help write up specs and work on the
> implementations in keystone?
>
> 
>
> Colleen (cmurphy)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [storyboard][oslo] Fewer stories than bugs?

2018-09-26 Thread Doug Hellmann
Ben Nemec  writes:

> Okay, I found a few bugs that are in launchpad but not storyboard:
>
> https://bugs.launchpad.net/python-stevedore/+bug/1784823
> https://bugs.launchpad.net/pbr/+bug/1777625
> https://bugs.launchpad.net/taskflow/+bug/1756520
> https://bugs.launchpad.net/pbr/+bug/1742809
>
> The latter three are all in an incomplete state, so maybe that's being 
> ignored by the migration script? The first appears to be a completely 
> missing project.  None of the stevedore bugs I've spot checked are in 
> storyboard. Maybe it has to do with the fact that the project name is 
> stevedore but the bug link is python-stevedore? I'm not sure why that 
> is, but there may be something a little weird going on with that
> project.

The name "stevedore" was taking on LP when I registered that project, so
I had to use an alternative name.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [storyboard][oslo] Fewer stories than bugs?

2018-09-26 Thread Ben Nemec

Okay, I found a few bugs that are in launchpad but not storyboard:

https://bugs.launchpad.net/python-stevedore/+bug/1784823
https://bugs.launchpad.net/pbr/+bug/1777625
https://bugs.launchpad.net/taskflow/+bug/1756520
https://bugs.launchpad.net/pbr/+bug/1742809

The latter three are all in an incomplete state, so maybe that's being 
ignored by the migration script? The first appears to be a completely 
missing project.  None of the stevedore bugs I've spot checked are in 
storyboard. Maybe it has to do with the fact that the project name is 
stevedore but the bug link is python-stevedore? I'm not sure why that 
is, but there may be something a little weird going on with that project.


On 9/25/18 1:22 PM, Kendall Nelson wrote:

Hey Ben,

I am looking into it! I am guessing that some of the discrepancy is bugs 
being filed after I did the migration. I might also have missed one of 
the launchpad projects. I will redo the migrations today and we can see 
if the numbers match up after (or are at least much closer).


We've never had an issue with stories not being created and there were 
no errors in any of the runs I did of the migration scripts. I'm 
guessing PEBKAC :)


-Kendall (diablo_rojo)

On Mon, Sep 24, 2018 at 2:38 PM Ben Nemec wrote:


This is a more oslo-specific (maybe) question that came out of the test
migration. I noticed that launchpad is reporting 326 open bugs across
the Oslo projects, but in Storyboard there are only 266 stories
created.
While I'm totally onboard with reducing our bug backlog, I'm curious
why
that is the case. I'm speculating that maybe Launchpad counts bugs that
affect multiple Oslo projects as multiple bugs whereas Storyboard is
counting them as a single story?

I think we were also going to skip
https://bugs.launchpad.net/openstack-infra which for some reason
appeared in the oslo group, but that's only two bugs so it doesn't
account for anywhere near the full difference.

Mostly I just want to make sure we didn't miss something. I'm hoping
this is a known behavior and we don't have to start comparing bug lists
to find the difference. :-)

Thanks.

-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Lance Bragstad
For those who may be following along and are not familiar with what we mean
by federated auto-provisioning, see [0].

[0]
https://docs.openstack.org/keystone/latest/advanced-topics/federation/federated_identity.html#auto-provisioning
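
For reference, the mapping rules behind that feature are JSON shaped roughly
like this (the remote attribute names are assumptions and depend entirely on
the IdP):

    {
        "rules": [
            {
                "local": [
                    {
                        "user": {"name": "{0}"},
                        "projects": [
                            {
                                "name": "Production",
                                "roles": [{"name": "member"}]
                            }
                        ]
                    }
                ],
                "remote": [
                    {"type": "UserName"}
                ]
            }
        ]
    }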

On Wed, Sep 26, 2018 at 9:06 AM Morgan Fainberg 
wrote:

> This discussion was also not about user assigned IDs, but predictable IDs
> with the auto provisioning. We still want it to be something keystone
> controls (locally). It might be a hash of the domain ID and a value from
> the assertion (similar to the LDAP user ID generator). As long as, within
> an environment, the IDs are predictable when auto-provisioning via federation, we should be
> good. And the problem of the totally unknown ID until provisioning could be
> made less of an issue for someone working within a massively federated edge
> environment.
>
> I don't want user/explicit admin set IDs.
>
> On Wed, Sep 26, 2018, 04:43 Jay Pipes  wrote:
>
>> On 09/26/2018 05:10 AM, Colleen Murphy wrote:
>> > Thanks for the summary, Ildiko. I have some questions inline.
>> >
>> > On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:
>> >
>> > 
>> >
>> >>
>> >> We agreed to prefer federation for Keystone and came up with two work
>> >> items to cover missing functionality:
>> >>
>> >> * Keystone to trust a token from an ID Provider master and when the
>> auth
>> >> method is called, perform an idempotent creation of the user, project
>> >> and role assignments according to the assertions made in the token
>> >
>> > This sounds like it is based on the customizations done at Oath, which
>> to my recollection did not use the actual federation implementation in
>> keystone due to its reliance on Athenz (I think?) as an identity manager.
>> Something similar can be accomplished in standard keystone with the mapping
>> API in keystone which can cause dynamic generation of a shadow user,
>> project and role assignments.
>> >
>> >> * Keystone should support the creation of users and projects with
>> >> predictable UUIDs (eg.: hash of the name of the users and projects).
>> >> This greatly simplifies Image federation and telemetry gathering
>> >
>> > I was in and out of the room and don't recall this discussion exactly.
>> We have historically pushed back hard against allowing setting a project ID
>> via the API, though I can see predictable-but-not-settable as less
>> problematic. One of the use cases from the past was being able to use the
>> same token in different regions, which is problematic from a security
>> perspective. Is that the idea here? Or could someone provide more details
>> on why this is needed?
>>
>> Hi Colleen,
>>
>> I wasn't in the room for this conversation either, but I believe the
>> "use case" wanted here is mostly a convenience one. If the edge
>> deployment is composed of hundreds of small Keystone installations and
>> you have a user (e.g. an NFV MANO user) which should have visibility
>> across all of those Keystone installations, it becomes a hassle to need
>> to remember (or in the case of headless users, store some lookup of) all
>> the different tenant and user UUIDs for what is essentially the same
>> user across all of those Keystone installations.
>>
>> I'd argue that as long as it's possible to create a Keystone tenant and
>> user with a unique name within a deployment, and as long as it's
>> possible to authenticate using the tenant and user *name* (i.e. not the
>> UUID), then this isn't too big of a problem. However, I do know that a
>> bunch of scripts and external tools rely on setting the tenant and/or
>> user via the UUID values and not the names, so that might be where this
>> feature request is coming from.
>>
>> Hope that makes sense?
>>
>> Best,
>> -jay
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptl][release] Proposed changes for cycle-with-milestones deliverables

2018-09-26 Thread Doug Hellmann
Jeremy Stanley  writes:

> On 2018-09-26 09:22:30 -0500 (-0500), Sean McGinnis wrote:
> [...]
>> It tests the release automation machinery to identify problems
>> before the RC and final release crunch time.
> [...]
>
> More to the point, it helped spot changes to projects which made it
> impossible to generate and publish their release artifacts. Coverage
> has improved for finding these issues before merging now, as well as
> in flight tests on proposed releases, making the risk lower than it
> used to be.

The new set of packaging jobs that are part of the
publish-to-pypi-python3 project template also includes a check queue job
that runs when any of the packaging files (setup.*, README.rst, etc.)
are modified. That should give us an even earlier warning of any
packaging failures.

Since all python projects will soon use the same release jobs, we will
know that the job is working in general based on other releases
(including more liberal use of our test repository before big
deadlines).

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Bug in documentation?

2018-09-26 Thread Zane Bitter

On 26/09/18 12:02 PM, Postlbauer, Juan wrote:

Hi everyone:

I see that heat doc 
https://docs.openstack.org/heat/rocky/template_guide/openstack.html#OS::Nova::Flavor 
states that


ram
    Memory in MB for the flavor.

disk
    Size of local disk in GB.

That would be 1000*1000 for ram and 1000*1000*1000 for disk.

But Nova doc 
https://developer.openstack.org/api-ref/compute/#create-flavor states that:


ram (body, integer)
    The amount of RAM a flavor has, in MiB.

disk (body, integer)
    The size of the root disk that will be created in GiB.

That would be 1024*1024 for ram and 1024*1024*1024 for disk, which, at
least for ram, makes much more sense to me.


Is this a typo in Heat documentation?


No, but it's ambiguous in a way that MiB/GiB would not be. Feel free to 
submit a patch.
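
For reference, the property in a template looks like this (a minimal
sketch; the values follow the compute API's MiB/GiB semantics):

    heat_template_version: rocky

    resources:
      compute_flavor:
        type: OS::Nova::Flavor
        properties:
          ram: 4096    # interpreted as MiB by nova
          disk: 40     # interpreted as GiB by nova
          vcpus: 2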


- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ryu integration with Openstack

2018-09-26 Thread Slawomir Kaplonski
Hi,

> Message from Niket Agrawal  on 
> 26.09.2018, at 18:11:
> 
> Hello,
> 
> I have a question regarding the Ryu integration in Openstack. By default, the 
> openvswitch bridges (br-int, br-tun and br-ex) are registered to a controller 
> running on 127.0.0.1 and port 6633. The output of ovs-vsctl get-manager is 
> ptcp:127.0.0.1:6640. This is noticed on the nova compute node. However there 
> is a different instance of the same Ryu controller running on the neutron 
> gateway as well and the three openvswitch bridges (br-int, br-tun and br-ex) 
> are registered to this instance of Ryu controller. If I stop 
> neutron-openvswitch agent on the nova compute node, the bridges there are no 
> longer connected to the controller, but the bridges in the neutron gateway 
> continue to remain connected to the controller. Only when I stop the neutron 
> openvswitch agent in the neutron gateway as well, the bridges there get 
> disconnected. 
> 
> I'm unable to find where in the Openstack code I can access this 
> implementation, because I intend to make a few tweaks to this architecture 
> which is present currently. Also, I'd like to know which app is the Ryu SDN 
> controller running by default at the moment. I feel the information in the 
> code can help me find it too.

Ryu app is started by neutron-openvswitch-agent in: 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/main.py#L34
Is this what you are looking for?
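
For reference, the pattern there is a minimal embedding of the Ryu app
manager, roughly like this (the app module path is what neutron registers;
treat this as a sketch rather than the exact code):

    from ryu.base import app_manager

    def main():
        # run_apps() loads the given RyuApp modules, wires up their event
        # handlers, and runs the OpenFlow controller event loop (which
        # listens on 127.0.0.1:6633 by default).
        app_manager.AppManager.run_apps([
            'neutron.plugins.ml2.drivers.openvswitch.agent.'
            'openflow.native.ovs_ryuapp',
        ])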

> 
> Regards,
> Niket
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

— 
Slawek Kaplonski
Senior software engineer
Red Hat


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Tim Bell

Doug,

Thanks for raising this. I'd like to highlight the goal "Finish moving legacy 
python-*client CLIs to python-openstackclient" from the etherpad and propose 
this for a T/U series goal.

To give it some context and the motivation:

At CERN, we have more than 3000 users of the OpenStack cloud. We write
extensive end-user facing documentation which explains how to use OpenStack
along with CERN-specific features (such as workflows for requesting
projects/quotas/etc.).

One regular problem we come across is that the end user experience is
inconsistent. In some cases, we find projects which are not covered by the
unified OpenStack client (e.g. Manila). In other cases, there are subsets of
the functionality which require the native project client.

I would strongly support a goal which targets

- All new projects should have the end user facing functionality fully exposed 
via the unified client
- Existing projects should aim to close the gap within 'N' cycles (N to be 
defined)
- Many administrator actions would also benefit from integration (reader roles 
are end users too so list and show need to be covered too)
- Users should be able to use a single openrc for all interactions with the 
cloud (e.g. not switch between password for some CLIs and Kerberos for OSC)

The end user perception of a solution will be greatly enhanced by a single 
command line tool with consistent syntax and authentication framework.

It may be a multi-release goal but it would really benefit the cloud consumers 
and I feel that goals should include this audience also.

Tim

-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 26 September 2018 at 18:00
To: openstack-dev , openstack-operators 
, openstack-sigs 

Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T 
series

It's time to start thinking about community-wide goals for the T series.

We use community-wide goals to achieve visible common changes, push for
basic levels of consistency and user experience, and efficiently improve
certain areas where technical debt payments have become too high -
across all OpenStack projects. Community input is important to ensure
that the TC makes good decisions about the goals. We need to consider
the timing, cycle length, priority, and feasibility of the suggested
goals.

If you are interested in proposing a goal, please make sure that before
the summit it is described in the tracking etherpad [1] and that you
have started a mailing list thread on the openstack-dev list about the
proposal so that everyone in the forum session [2] has an opportunity to
consider the details.  The forum session is only one step in the
selection process. See [3] for more details.

Doug

[1] https://etherpad.openstack.org/p/community-goals
[2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
[3] https://governance.openstack.org/tc/goals/index.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [storyboard] Prioritization?

2018-09-26 Thread Kendall Nelson
On Wed, Sep 26, 2018 at 9:50 AM Ben Nemec  wrote:

>
>
> On 9/25/18 3:29 AM, Thierry Carrez wrote:
> > Doug Hellmann wrote:
> >> I think we need to reconsider that position if it's going to block
> >> adoption. I think Ben's case is an excellent second example of where
> >> having a field to hold some sort of priority value would be useful.
> >
> > Absence of priorities was an initial design choice[1] based on the fact
> > that in an open collaboration every group, team, organization has their
> > own views on what the priority of a story is, so worklist and tags are
> > better ways to capture that. Also they don't really work unless you
> > triage everything. And then nobody really looks at them to prioritize
> > their work, so they are high cost for little benefit.
>
> So was the storyboard implementation based on the rant section then?
> Because I don't know that I agree with/understand some of the assertions
> there.
>
> First, don't we _need_ to triage everything? At least on some minimal
> level? Not looking at new bugs at all seems like the way you end up with
> a security bug open for two years *ahem*. Not that I would know anything
> about that (it's been fixed now, FTR).
>
> I'm also not sure I agree with the statement that setting a priority for
> a blueprint is useless. Prioritizing feature work is something everyone
> needs to do these days since no team has enough people to implement
> every proposed feature. Maybe the proposal is for everyone to adopt
> Nova-style runways, but I'm not sure how well that works for smaller
> projects where many of the developers are only able to devote part of
> their time to it. Setting a time window for a feature to merge or get
> kicked to the back of line would be problematic for me.
>
> That section also ends with an unanswered question regarding how to do
> bug triage in this model, which I guess is the thing we're trying to
> address with this discussion.
>
> >
> > That said, it definitely creates friction, because alternatives are less
> > convenient / visible, and it's not how other tools work... so the
> > "right" answer here may not be the "best" answer.
> >
> > [1] https://wiki.openstack.org/wiki/StoryBoard/Priority
> >
>
> Also, like it or not there is technical debt we're carrying over here.
> All of our bug triage up to this point has been based on launchpad
> priorities, and as I think I noted elsewhere it would be a big step
> backward to completely throw that out. Whatever model for prioritization
> and triage that we choose, I feel like there needs to be a reasonable
> migration path for the thousands of existing triaged lp bugs in OpenStack.
>

The information is being migrated [1]; we just don't expose it in the
web client. You can still access it via the API.
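
For example, something like this pulls story data back out of the REST API
(an untested sketch; the exact field and filter names may differ between
StoryBoard versions):

    import requests

    API = 'https://storyboard.openstack.org/api/v1'

    # List active stories and print any tags the migration carried over.
    stories = requests.get(API + '/stories',
                           params={'status': 'active'}).json()
    for story in stories[:10]:
        print(story['id'], story['title'], story.get('tags'))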


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-Kendall (diablo_rojo)

[1]
https://github.com/openstack-infra/storyboard/blob/master/storyboard/migrate/launchpad/writer.py#L183
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Arkady.Kanevsky
+1

-Original Message-
From: Tim Bell [mailto:tim.b...@cern.ch] 
Sent: Wednesday, September 26, 2018 1:56 PM
To: OpenStack Development Mailing List (not for usage questions); 
openstack-operators; openstack-sigs
Subject: Re: [Openstack-sigs] [openstack-dev] [goals][tc][ptl][uc] starting 
goal selection for T series





Doug,

Thanks for raising this. I'd like to highlight the goal "Finish moving legacy 
python-*client CLIs to python-openstackclient" from the etherpad and propose 
this for a T/U series goal.

To give it some context and the motivation:

At CERN, we have more than 3000 users of the OpenStack cloud. We write an 
extensive end user facing documentation which explains how to use the OpenStack 
along with CERN specific features (such as workflows for requesting 
projects/quotas/etc.). 

One regular problem we come across is that the end user experience is 
inconsistent. In some cases, we find projects which are not covered by the 
unified OpenStack client (e.g. Manila). In other cases, there are subsets of 
the function which require the native project client.

I would strongly support a goal which targets

- All new projects should have the end user facing functionality fully exposed 
via the unified client
- Existing projects should aim to close the gap within 'N' cycles (N to be 
defined)
- Many administrator actions would also benefit from integration (reader roles 
are end users too so list and show need to be covered too)
- Users should be able to use a single openrc for all interactions with the 
cloud (e.g. not switch between password for some CLIs and Kerberos for OSC)

The end user perception of a solution will be greatly enhanced by a single 
command line tool with consistent syntax and authentication framework.

It may be a multi-release goal but it would really benefit the cloud consumers 
and I feel that goals should include this audience also.

Tim

-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 26 September 2018 at 18:00
To: openstack-dev , openstack-operators 
, openstack-sigs 

Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T 
series

It's time to start thinking about community-wide goals for the T series.

We use community-wide goals to achieve visible common changes, push for
basic levels of consistency and user experience, and efficiently improve
certain areas where technical debt payments have become too high -
across all OpenStack projects. Community input is important to ensure
that the TC makes good decisions about the goals. We need to consider
the timing, cycle length, priority, and feasibility of the suggested
goals.

If you are interested in proposing a goal, please make sure that before
the summit it is described in the tracking etherpad [1] and that you
have started a mailing list thread on the openstack-dev list about the
proposal so that everyone in the forum session [2] has an opportunity to
consider the details.  The forum session is only one step in the
selection process. See [3] for more details.

Doug

[1] https://etherpad.openstack.org/p/community-goals
[2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
[3] https://governance.openstack.org/tc/goals/index.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
openstack-sigs mailing list
openstack-s...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Matt Riedemann

On 9/26/2018 3:01 PM, Doug Hellmann wrote:

Monty Taylor  writes:


On 09/26/2018 01:55 PM, Tim Bell wrote:

Doug,

Thanks for raising this. I'd like to highlight the goal "Finish moving legacy 
python-*client CLIs to python-openstackclient" from the etherpad and propose this 
for a T/U series goal.


I would personally like to thank the person that put that goal in the 
etherpad...they must have had amazing foresight and unparalleled modesty.




To give it some context and the motivation:

At CERN, we have more than 3000 users of the OpenStack cloud. We write an 
extensive end user facing documentation which explains how to use the OpenStack 
along with CERN specific features (such as workflows for requesting 
projects/quotas/etc.).

One regular problem we come across is that the end user experience is 
inconsistent. In some cases, we find projects which are not covered by the 
unified OpenStack client (e.g. Manila). In other cases, there are subsets of 
the function which require the native project client.

I would strongly support a goal which targets

- All new projects should have the end user facing functionality fully exposed 
via the unified client
- Existing projects should aim to close the gap within 'N' cycles (N to be 
defined)
- Many administrator actions would also benefit from integration (reader roles 
are end users too so list and show need to be covered too)
- Users should be able to use a single openrc for all interactions with the 
cloud (e.g. not switch between password for some CLIs and Kerberos for OSC)

The end user perception of a solution will be greatly enhanced by a single 
command line tool with consistent syntax and authentication framework.

It may be a multi-release goal but it would really benefit the cloud consumers 
and I feel that goals should include this audience also.

++

It's also worth noting that we're REALLY close to a 1.0 of openstacksdk
(all the patches are in flight, we just need to land them) - and once
we've got that we'll be in a position to start shifting
python-openstackclient to using openstacksdk instead of python-*client.

This will have the additional benefit that, once we've migrated CLIs to
python-openstackclient as per this goal, and once we've migrated
openstackclient itself to openstacksdk, the number of different
libraries one needs to install to interact with openstack will be
_dramatically_  lower.

Would it be useful to have the SDK work in OSC as a prerequisite to the
goal work? I would hate to have folks have to write a bunch of things
twice.

Do we have any sort of list of which projects aren't currently being
handled by OSC? If we could get some help building such a list, that
would help us understand the scope of the work.


I started documenting the compute API gaps in OSC last release [1]. It's 
a big gap and needs a lot of work, even for existing CLIs (the cold/live 
migration CLIs in OSC are a mess, and you can't even boot from volume 
where nova creates the volume for you). That's also why I put something 
into the etherpad about whether the OSC core team could even handle an 
onslaught of changes for a goal like this.




As far as admin features, I think we've been hesitant to add those to
OSC in the past, but I can see the value. I wonder if having them in a
separate library makes sense? Or is it better to have commands in the
tool that regular users can't access, and just report the permission
error when they try to run the command?


I thought the same, and we talked about this at the Austin summit, but 
OSC is inconsistent about this (you can live migrate a server but you 
can't evacuate it - there is no CLI for evacuation). It also came up at 
the Stein PTG with Dean in the nova room giving us some direction. [2] I 
believe the summary of that discussion was:


a) to deal with the core team sprawl, we could move the compute stuff 
out of python-openstackclient and into an osc-compute plugin (like the 
osc-placement plugin for the placement service); then we could create a 
new core team which would have python-openstackclient-core as a superset


b) Dean suggested that we close the compute API gaps in the SDK first, 
but that could take a long time as well...but it sounded like we could 
use the SDK for things that existed in the SDK and use novaclient for 
things that didn't yet exist in the SDK


This might be a candidate for one of these multi-release goals that the 
TC started talking about at the Stein PTG. I could see something like 
this being a goal for Stein:


"Each project owns its own osc- plugin for OSC CLIs"

That deals with the core team sprawl issue, especially with stevemar 
being gone and dtroyer being distracted by shiny x-men-bird-related 
things. That also seems relatively manageable for all projects to do in 
a single release. Having a single-release goal of "close all gaps across 
all service types" is going to be extremely tough for any older projects 
that had CLIs before OSC was created 

Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Fox, Kevin M
+1 :)

From: Tim Bell [tim.b...@cern.ch]
Sent: Wednesday, September 26, 2018 11:55 AM
To: OpenStack Development Mailing List (not for usage questions); 
openstack-operators; openstack-sigs
Subject: Re: [Openstack-sigs] [openstack-dev] [goals][tc][ptl][uc] starting 
goal selection for T series

Doug,

Thanks for raising this. I'd like to highlight the goal "Finish moving legacy 
python-*client CLIs to python-openstackclient" from the etherpad and propose 
this for a T/U series goal.

To give it some context and the motivation:

At CERN, we have more than 3000 users of the OpenStack cloud. We write an 
extensive end user facing documentation which explains how to use the OpenStack 
along with CERN specific features (such as workflows for requesting 
projects/quotas/etc.).

One regular problem we come across is that the end user experience is 
inconsistent. In some cases, we find projects which are not covered by the 
unified OpenStack client (e.g. Manila). In other cases, there are subsets of 
the function which require the native project client.

I would strongly support a goal which targets

- All new projects should have the end user facing functionality fully exposed 
via the unified client
- Existing projects should aim to close the gap within 'N' cycles (N to be 
defined)
- Many administrator actions would also benefit from integration (reader roles 
are end users too so list and show need to be covered too)
- Users should be able to use a single openrc for all interactions with the 
cloud (e.g. not switch between password for some CLIs and Kerberos for OSC)

The end user perception of a solution will be greatly enhanced by a single 
command line tool with consistent syntax and authentication framework.

It may be a multi-release goal but it would really benefit the cloud consumers 
and I feel that goals should include this audience also.

Tim

-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 26 September 2018 at 18:00
To: openstack-dev , openstack-operators 
, openstack-sigs 

Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T 
series

It's time to start thinking about community-wide goals for the T series.

We use community-wide goals to achieve visible common changes, push for
basic levels of consistency and user experience, and efficiently improve
certain areas where technical debt payments have become too high -
across all OpenStack projects. Community input is important to ensure
that the TC makes good decisions about the goals. We need to consider
the timing, cycle length, priority, and feasibility of the suggested
goals.

If you are interested in proposing a goal, please make sure that before
the summit it is described in the tracking etherpad [1] and that you
have started a mailing list thread on the openstack-dev list about the
proposal so that everyone in the forum session [2] has an opportunity to
consider the details.  The forum session is only one step in the
selection process. See [3] for more details.

Doug

[1] https://etherpad.openstack.org/p/community-goals
[2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
[3] https://governance.openstack.org/tc/goals/index.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
openstack-sigs mailing list
openstack-s...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Monty Taylor

On 09/26/2018 01:55 PM, Tim Bell wrote:


Doug,

Thanks for raising this. I'd like to highlight the goal "Finish moving legacy 
python-*client CLIs to python-openstackclient" from the etherpad and propose this 
for a T/U series goal.

To give it some context and the motivation:

At CERN, we have more than 3000 users of the OpenStack cloud. We write an 
extensive end user facing documentation which explains how to use the OpenStack 
along with CERN specific features (such as workflows for requesting 
projects/quotas/etc.).

One regular problem we come across is that the end user experience is 
inconsistent. In some cases, we find projects which are not covered by the 
unified OpenStack client (e.g. Manila). In other cases, there are subsets of 
the function which require the native project client.

I would strongly support a goal which targets

- All new projects should have the end user facing functionality fully exposed 
via the unified client
- Existing projects should aim to close the gap within 'N' cycles (N to be 
defined)
- Many administrator actions would also benefit from integration (reader roles 
are end users too so list and show need to be covered too)
- Users should be able to use a single openrc for all interactions with the 
cloud (e.g. not switch between password for some CLIs and Kerberos for OSC)

The end user perception of a solution will be greatly enhanced by a single 
command line tool with consistent syntax and authentication framework.

It may be a multi-release goal but it would really benefit the cloud consumers 
and I feel that goals should include this audience also.


++

It's also worth noting that we're REALLY close to a 1.0 of openstacksdk 
(all the patches are in flight, we just need to land them) - and once 
we've got that we'll be in a position to start shifting 
python-openstackclient to using openstacksdk instead of python-*client.


This will have the additional benefit that, once we've migrated CLIs to 
python-openstackclient as per this goal, and once we've migrated 
openstackclient itself to openstacksdk, the number of different 
libraries one needs to install to interact with openstack will be 
_dramatically_ lower.



-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 26 September 2018 at 18:00
To: openstack-dev , openstack-operators 
, openstack-sigs 

Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T 
series

 It's time to start thinking about community-wide goals for the T series.
 
 We use community-wide goals to achieve visible common changes, push for

 basic levels of consistency and user experience, and efficiently improve
 certain areas where technical debt payments have become too high -
 across all OpenStack projects. Community input is important to ensure
 that the TC makes good decisions about the goals. We need to consider
 the timing, cycle length, priority, and feasibility of the suggested
 goals.
 
 If you are interested in proposing a goal, please make sure that before

 the summit it is described in the tracking etherpad [1] and that you
 have started a mailing list thread on the openstack-dev list about the
 proposal so that everyone in the forum session [2] has an opportunity to
 consider the details.  The forum session is only one step in the
 selection process. See [3] for more details.
 
 Doug
 
 [1] https://etherpad.openstack.org/p/community-goals

 [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
 [3] https://governance.openstack.org/tc/goals/index.html
 
 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Doug Hellmann
Monty Taylor  writes:

> On 09/26/2018 01:55 PM, Tim Bell wrote:
>> 
>> Doug,
>> 
>> Thanks for raising this. I'd like to highlight the goal "Finish moving 
>> legacy python-*client CLIs to python-openstackclient" from the etherpad and 
>> propose this for a T/U series goal.
>> 
>> To give it some context and the motivation:
>> 
>> At CERN, we have more than 3000 users of the OpenStack cloud. We write an 
>> extensive end user facing documentation which explains how to use the 
>> OpenStack along with CERN specific features (such as workflows for 
>> requesting projects/quotas/etc.).
>> 
>> One regular problem we come across is that the end user experience is 
>> inconsistent. In some cases, we find projects which are not covered by the 
>> unified OpenStack client (e.g. Manila). In other cases, there are subsets of 
>> the function which require the native project client.
>> 
>> I would strongly support a goal which targets
>> 
>> - All new projects should have the end user facing functionality fully 
>> exposed via the unified client
>> - Existing projects should aim to close the gap within 'N' cycles (N to be 
>> defined)
>> - Many administrator actions would also benefit from integration (reader 
>> roles are end users too so list and show need to be covered too)
>> - Users should be able to use a single openrc for all interactions with the 
>> cloud (e.g. not switch between password for some CLIs and Kerberos for OSC)
>> 
>> The end user perception of a solution will be greatly enhanced by a single 
>> command line tool with consistent syntax and authentication framework.
>> 
>> It may be a multi-release goal but it would really benefit the cloud 
>> consumers and I feel that goals should include this audience also.
>
> ++
>
> It's also worth noting that we're REALLY close to a 1.0 of openstacksdk 
> (all the patches are in flight, we just need to land them) - and once 
> we've got that we'll be in a position to start shifting 
> python-openstackclient to using openstacksdk instead of python-*client.
>
> This will have the additional benefit that, once we've migrated CLIs to 
> python-openstackclient as per this goal, and once we've migrated 
> openstackclient itself to openstacksdk, the number of different 
> libraries one needs to install to interact with openstack will be 
> _dramatically_ lower.

Would it be useful to have the SDK work in OSC as a prerequisite to the
goal work? I would hate to have folks have to write a bunch of things
twice.

Do we have any sort of list of which projects aren't currently being
handled by OSC? If we could get some help building such a list, that
would help us understand the scope of the work.

As far as admin features, I think we've been hesitant to add those to
OSC in the past, but I can see the value. I wonder if having them in a
separate library makes sense? Or is it better to have commands in the
tool that regular users can't access, and just report the permission
error when they try to run the command?

Doug

>
>> -Original Message-
>> From: Doug Hellmann 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>> Date: Wednesday, 26 September 2018 at 18:00
>> To: openstack-dev , openstack-operators 
>> , openstack-sigs 
>> 
>> Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T  
>> series
>> 
>>  It's time to start thinking about community-wide goals for the T series.
>>  
>>  We use community-wide goals to achieve visible common changes, push for
>>  basic levels of consistency and user experience, and efficiently improve
>>  certain areas where technical debt payments have become too high -
>>  across all OpenStack projects. Community input is important to ensure
>>  that the TC makes good decisions about the goals. We need to consider
>>  the timing, cycle length, priority, and feasibility of the suggested
>>  goals.
>>  
>>  If you are interested in proposing a goal, please make sure that before
>>  the summit it is described in the tracking etherpad [1] and that you
>>  have started a mailing list thread on the openstack-dev list about the
>>  proposal so that everyone in the forum session [2] has an opportunity to
>>  consider the details.  The forum session is only one step in the
>>  selection process. See [3] for more details.
>>  
>>  Doug
>>  
>>  [1] https://etherpad.openstack.org/p/community-goals
>>  [2] 
>> https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
>>  [3] https://governance.openstack.org/tc/goals/index.html
>>  
>>  
>> __
>>  OpenStack Development Mailing List (not for usage questions)
>>  Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>  
>> 
>> 

Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Tom Barron

On 26/09/18 18:55 +, Tim Bell wrote:


Doug,

Thanks for raising this. I'd like to highlight the goal "Finish moving legacy 
python-*client CLIs to python-openstackclient" from the etherpad and propose this 
for a T/U series goal.

To give it some context and the motivation:

At CERN, we have more than 3000 users of the OpenStack cloud. We write an 
extensive end user facing documentation which explains how to use the OpenStack 
along with CERN specific features (such as workflows for requesting 
projects/quotas/etc.).

One regular problem we come across is that the end user experience is 
inconsistent. In some cases, we find projects which are not covered by the 
unified OpenStack client (e.g. Manila).


Tim,

First, I endorse this goal.

That said, the lack of Manila coverage in the OpenStack client was 
already raised as a gap (by CERN and others) during the Vancouver Forum.

At the recent Manila PTG we set addressing this technical debt as a 
Stein cycle goal, along with OpenStack SDK integration for Manila.


-- Tom Barron (tbarron)


In other cases, there are subsets of the function which require the native 
project client.

I would strongly support a goal which targets

- All new projects should have the end user facing functionality fully exposed 
via the unified client
- Existing projects should aim to close the gap within 'N' cycles (N to be 
defined)
- Many administrator actions would also benefit from integration (reader roles 
are end users too so list and show need to be covered too)
- Users should be able to use a single openrc for all interactions with the 
cloud (e.g. not switch between password for some CLIs and Kerberos for OSC)

The end user perception of a solution will be greatly enhanced by a single 
command line tool with consistent syntax and authentication framework.

It may be a multi-release goal but it would really benefit the cloud consumers 
and I feel that goals should include this audience also.

Tim

-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 26 September 2018 at 18:00
To: openstack-dev , openstack-operators 
, openstack-sigs 

Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T 
series

   It's time to start thinking about community-wide goals for the T series.

   We use community-wide goals to achieve visible common changes, push for
   basic levels of consistency and user experience, and efficiently improve
   certain areas where technical debt payments have become too high -
   across all OpenStack projects. Community input is important to ensure
   that the TC makes good decisions about the goals. We need to consider
   the timing, cycle length, priority, and feasibility of the suggested
   goals.

   If you are interested in proposing a goal, please make sure that before
   the summit it is described in the tracking etherpad [1] and that you
   have started a mailing list thread on the openstack-dev list about the
   proposal so that everyone in the forum session [2] has an opportunity to
   consider the details.  The forum session is only one step in the
   selection process. See [3] for more details.

   Doug

   [1] https://etherpad.openstack.org/p/community-goals
   [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
   [3] https://governance.openstack.org/tc/goals/index.html

   __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
openstack-sigs mailing list
openstack-s...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [storyboard][oslo] Fewer stories than bugs?

2018-09-26 Thread Kendall Nelson
So I 100% messed up that migration. Looking back at my terminal history, I
migrated 'stevedore' instead of 'python-stevedore'. I have now migrated the
correct project and all should be well.

Some napkin math says that the number of Oslo bugs in LP, minus
oslo-incubator (we concluded we didn't need to migrate that one), matches
the number of stories in the Oslo project group in StoryBoard.

Sorry for the confusion!

-Kendall (diablo_rojo)

On Wed, Sep 26, 2018 at 11:30 AM Doug Hellmann 
wrote:

> Ben Nemec  writes:
>
> > Okay, I found a few bugs that are in launchpad but not storyboard:
> >
> > https://bugs.launchpad.net/python-stevedore/+bug/1784823
> > https://bugs.launchpad.net/pbr/+bug/1777625
> > https://bugs.launchpad.net/taskflow/+bug/1756520
> > https://bugs.launchpad.net/pbr/+bug/1742809
> >
> > The latter three are all in an incomplete state, so maybe that's being
> > ignored by the migration script? The first appears to be a completely
> > missing project.  None of the stevedore bugs I've spot checked are in
> > storyboard. Maybe it has to do with the fact that the project name is
> > stevedore but the bug link is python-stevedore? I'm not sure why that
> > is, but there may be something a little weird going on with that
> > project.
>
> The name "stevedore" was taking on LP when I registered that project, so
> I had to use an alternative name.
>
> Doug
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Mathieu Gagné
+1 Yes please!

--
Mathieu

On Wed, Sep 26, 2018 at 2:56 PM Tim Bell  wrote:
>
>
> Doug,
>
> Thanks for raising this. I'd like to highlight the goal "Finish moving legacy 
> python-*client CLIs to python-openstackclient" from the etherpad and propose 
> this for a T/U series goal.
>
> To give it some context and the motivation:
>
> At CERN, we have more than 3000 users of the OpenStack cloud. We write an 
> extensive end user facing documentation which explains how to use the 
> OpenStack along with CERN specific features (such as workflows for requesting 
> projects/quotas/etc.).
>
> One regular problem we come across is that the end user experience is 
> inconsistent. In some cases, we find projects which are not covered by the 
> unified OpenStack client (e.g. Manila). In other cases, there are subsets of 
> the function which require the native project client.
>
> I would strongly support a goal which targets
>
> - All new projects should have the end user facing functionality fully 
> exposed via the unified client
> - Existing projects should aim to close the gap within 'N' cycles (N to be 
> defined)
> - Many administrator actions would also benefit from integration (reader 
> roles are end users too so list and show need to be covered too)
> - Users should be able to use a single openrc for all interactions with the 
> cloud (e.g. not switch between password for some CLIs and Kerberos for OSC)
>
> The end user perception of a solution will be greatly enhanced by a single 
> command line tool with consistent syntax and authentication framework.
>
> It may be a multi-release goal but it would really benefit the cloud 
> consumers and I feel that goals should include this audience also.
>
> Tim
>
> -Original Message-
> From: Doug Hellmann 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: Wednesday, 26 September 2018 at 18:00
> To: openstack-dev , openstack-operators 
> , openstack-sigs 
> 
> Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T   
>   series
>
> It's time to start thinking about community-wide goals for the T series.
>
> We use community-wide goals to achieve visible common changes, push for
> basic levels of consistency and user experience, and efficiently improve
> certain areas where technical debt payments have become too high -
> across all OpenStack projects. Community input is important to ensure
> that the TC makes good decisions about the goals. We need to consider
> the timing, cycle length, priority, and feasibility of the suggested
> goals.
>
> If you are interested in proposing a goal, please make sure that before
> the summit it is described in the tracking etherpad [1] and that you
> have started a mailing list thread on the openstack-dev list about the
> proposal so that everyone in the forum session [2] has an opportunity to
> consider the details.  The forum session is only one step in the
> selection process. See [3] for more details.
>
> Doug
>
> [1] https://etherpad.openstack.org/p/community-goals
> [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
> [3] https://governance.openstack.org/tc/goals/index.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> openstack-sigs mailing list
> openstack-s...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Dean Troyer
On Wed, Sep 26, 2018 at 3:01 PM, Doug Hellmann  wrote:
> Would it be useful to have the SDK work in OSC as a prerequisite to the
> goal work? I would hate to have folks have to write a bunch of things
> twice.

I don't think this is necessary. Once we have the auth and service
discovery/version negotiation plumbing in OSC done properly, new things can
be done in OSC without having to wait for conversion.  Any of the
existing client libs that can utilize an adapter from the SDK make
this even simpler for conversion.

> Do we have any sort of list of which projects aren't currently being
> handled by OSC? If we could get some help building such a list, that
> would help us understand the scope of the work.

We have asked plugins to maintain their presence in the OSC docs [0];
three projects are listed there as not having plugins, but I wouldn't
consider that exhaustive.  We also ask them to list their resource
names in [1] to reserve the names and help prevent collisions.
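
For context, the plugin surface is fairly small: a command class plus a
couple of entry points. A minimal sketch (all names here are hypothetical):

    # widgetclient/osc/v1/widget.py
    from cliff.show import ShowOne

    class ShowWidget(ShowOne):
        """Show details of a widget."""

        def get_parser(self, prog_name):
            parser = super(ShowWidget, self).get_parser(prog_name)
            parser.add_argument('widget', help='Widget to display')
            return parser

        def take_action(self, parsed_args):
            # client_manager attribute comes from the plugin's make_client()
            client = self.app.client_manager.widget
            data = client.get(parsed_args.widget).to_dict()
            # ShowOne expects (column_names, values)
            return zip(*sorted(data.items()))

    # setup.cfg then wires the command in:
    #
    # [entry_points]
    # openstack.cli.extension =
    #     widget = widgetclient.osc.plugin
    # openstack.widget.v1 =
    #     widget_show = widgetclient.osc.v1.widget:ShowWidget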

> As far as admin features, I think we've been hesitant to add those to
> OSC in the past, but I can see the value. I wonder if having them in a
> separate library makes sense? Or is it better to have commands in the
> tool that regular users can't access, and just report the permission
> error when they try to run the command?

The admin/non-admin distinction has not been a hard rule in most
places; we have plenty of admin commands in OSC.  At times we have
talked about pulling those out of the OSC repo into an admin plugin. I
haven't encouraged that, as I am not convinced of the value enough to
put aside other things to do it.  Due to configurable policy it also
is not clear what to include and exclude. To me it is a better user
experience, and more interoperable between cloud deployments, to
correctly handle when admin/policy refuses to do something and let the
user sort it out as necessary.

[0] 
https://docs.openstack.org/python-openstackclient/latest/contributor/plugins.html#adoption
[1] 
https://docs.openstack.org/python-openstackclient/latest/cli/commands.html#plugin-objects

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goal][python3] week 7 update

2018-09-26 Thread Doug Hellmann
Doug Hellmann  writes:

> == Things We Learned This Week ==
>
> When we updated the tox.ini settings for jobs like pep8 and release
> notes early in the Rocky session we only touched some of the official
> repositories. I'll be working on making a list of the ones we missed so
> we can update them by the end of Stein.

I see quite a few repositories with tox settings out of date (about 350,
see below). Given the volume, I'm going to prepare the patches and
propose them a few at a time over the next couple of weeks.

As background, each repo needs a patch (to master only) that looks like
[1]. It needs to set the "basepython" parameter in all of the relevant
tox environments to "python3" to force using python 3. It is most
important to set the docs, linters, pep8, releasenotes,
lower-constraints and venv environments, but we also wanted to include
bindep and cover if they are present.
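
For illustration, the resulting stanzas look roughly like this (the
environment names are from the list above; the commands shown here are
just typical placeholders and vary per repo):

  [testenv:pep8]
  basepython = python3
  commands = flake8 {posargs}

  [testenv:docs]
  basepython = python3
  commands = sphinx-build -W -b html doc/source doc/build/html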

The patches I prepare will update all of those environments.  We should
also include any other environments that run jobs, but teams may want to
duplicate some (and add the relevant jobs) rather than changing all of
the functional test jobs. As with the other functional job changes, I
will leave those up to the project teams.

As the commit message on [1] explains, we are using "python3" on
purpose:

* We do not want to specify a minor version number, because we do not
  want to have to update the file every time we upgrade python.

* We do not want to set the override once in testenv, because that
  breaks the more specific versions used in default environments like
  py35 and py36 (at least under older versions of tox).

In case you want to watch for them, all of the new patches will use "fix
tox python3 overrides" as the first line of the commit message (the
tracking tool looks for that string).

Doug

[1] https://review.openstack.org/#/c/573355/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ryu integration with Openstack

2018-09-26 Thread Niket Agrawal
Hi,

Thanks for your reply. Is there a way to access the code that is running in
the app, to see what logic is implemented in it?

Regards,
Niket

On Wed, Sep 26, 2018 at 10:31 PM Slawomir Kaplonski 
wrote:

> Hi,
>
> > Wiadomość napisana przez Niket Agrawal  w dniu
> 26.09.2018, o godz. 18:11:
> >
> > Hello,
> >
> > I have a question regarding the Ryu integration in Openstack. By
> default, the openvswitch bridges (br-int, br-tun and br-ex) are registered
> to a controller running on 127.0.0.1 and port 6633. The output of ovs-vsctl
> get-manager is ptcp:127.0.0.1:6640. This is noticed on the nova compute
> node. However there is a different instance of the same Ryu controller
> running on the neutron gateway as well and the three openvswitch bridges
> (br-int, br-tun and br-ex) are registered to this instance of Ryu
> controller. If I stop neutron-openvswitch agent on the nova compute node,
> the bridges there are no longer connected to the controller, but the
> bridges in the neutron gateway continue to remain connected to the
> controller. Only when I stop the neutron openvswitch agent in the neutron
> gateway as well, the bridges there get disconnected.
> >
> > I'm unable to find where in the Openstack code I can access this
> implementation, because I intend to make a few tweaks to this architecture
> which is present currently. Also, I'd like to know which app is the Ryu SDN
> controller running by default at the moment. I feel the information in the
> code can help me find it too.
>
> Ryu app is started by neutron-openvswitch-agent in:
> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/main.py#L34
> Is it what You are looking for?
>
> >
> > Regards,
> > Niket
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Rochelle Grober
Oh, very definitely +1000



--
Rochelle Grober
M: +1-650-888-9722 (preferred)
E: rochelle.gro...@huawei.com
2012 Laboratories - Silicon Valley Technology Planning &
Cooperation, Silicon Valley Research Center
From: Mathieu Gagné
To: openstack-s...@lists.openstack.org
Cc: OpenStack Development Mailing List (not for usage questions), OpenStack Operators
Date: 2018-09-26 12:41:24
Subject: Re: [Openstack-sigs] [openstack-dev] [goals][tc][ptl][uc] starting goal
selection for T series

+1 Yes please!

--
Mathieu

On Wed, Sep 26, 2018 at 2:56 PM Tim Bell  wrote:
>
>
> Doug,
>
> Thanks for raising this. I'd like to highlight the goal "Finish moving legacy 
> python-*client CLIs to python-openstackclient" from the etherpad and propose 
> this for a T/U series goal.
>
> To give it some context and the motivation:
>
> At CERN, we have more than 3000 users of the OpenStack cloud. We write an 
> extensive end user facing documentation which explains how to use the 
> OpenStack along with CERN specific features (such as workflows for requesting 
> projects/quotas/etc.).
>
> One regular problem we come across is that the end user experience is 
> inconsistent. In some cases, we find projects which are not covered by the 
> unified OpenStack client (e.g. Manila). In other cases, there are subsets of 
> the functionality which require the native project client.
>
> I would strongly support a goal which targets
>
> - All new projects should have the end user facing functionality fully 
> exposed via the unified client
> - Existing projects should aim to close the gap within 'N' cycles (N to be 
> defined)
> - Many administrator actions would also benefit from integration (reader 
> roles are end users too, so list and show also need to be covered)
> - Users should be able to use a single openrc for all interactions with the 
> cloud (e.g. not switch between password for some CLIs and Kerberos for OSC)
>
> The end user perception of a solution will be greatly enhanced by a single 
> command line tool with consistent syntax and authentication framework.
>
> It may be a multi-release goal but it would really benefit the cloud 
> consumers and I feel that goals should include this audience also.
>
> Tim
>
> -Original Message-
> From: Doug Hellmann 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: Wednesday, 26 September 2018 at 18:00
> To: openstack-dev , openstack-operators 
> , openstack-sigs 
> 
> Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T   
>   series
>
> It's time to start thinking about community-wide goals for the T series.
>
> We use community-wide goals to achieve visible common changes, push for
> basic levels of consistency and user experience, and efficiently improve
> certain areas where technical debt payments have become too high -
> across all OpenStack projects. Community input is important to ensure
> that the TC makes good decisions about the goals. We need to consider
> the timing, cycle length, priority, and feasibility of the suggested
> goals.
>
> If you are interested in proposing a goal, please make sure that before
> the summit it is described in the tracking etherpad [1] and that you
> have started a mailing list thread on the openstack-dev list about the
> proposal so that everyone in the forum session [2] has an opportunity to
> consider the details.  The forum session is only one step in the
> selection process. See [3] for more details.
>
> Doug
>
> [1] https://etherpad.openstack.org/p/community-goals
> [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
> [3] https://governance.openstack.org/tc/goals/index.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> openstack-sigs mailing list
> openstack-s...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs

___
openstack-sigs mailing list
openstack-s...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Monty Taylor

On 09/26/2018 04:12 PM, Dean Troyer wrote:

On Wed, Sep 26, 2018 at 3:01 PM, Doug Hellmann  wrote:

Would it be useful to have the SDK work in OSC as a prerequisite to the
goal work? I would hate to have folks have to write a bunch of things
twice.


I don't think this is necessary, once we have the auth and service
discovery/version negotiation plumbing in OSC properly new things can
be done in OSC without having to wait for conversion.  Any of the
existing client libs that can utilize an adapter form the SDK makes
this even simpler for conversion.


As one might expect, I agree with Dean. I don't think we need to wait on it.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Dean Troyer
On Wed, Sep 26, 2018 at 3:44 PM, Matt Riedemann  wrote:
> I started documenting the compute API gaps in OSC last release [1]. It's a
> big gap and needs a lot of work, even for existing CLIs (the cold/live
> migration CLIs in OSC are a mess, and you can't even boot from volume where
> nova creates the volume for you). That's also why I put something into the
> etherpad about the OSC core team even being able to handle an onslaught of
> changes for a goal like this.

The OSC core team is very thin, yes, it seems as though companies
don't like to spend money on client-facing things...I'll be in the
hall following this thread should anyone want to talk...

The migration commands are a mess, mostly because I got them wrong to
start with and we have only tried to patch them up since. This is one
area I think we need to wipe clean and fix properly.  Yay! Major version
release!

> I thought the same, and we talked about this at the Austin summit, but OSC
> is inconsistent about this (you can live migrate a server but you can't
> evacuate it - there is no CLI for evacuation). It also came up at the Stein
> PTG with Dean in the nova room giving us some direction. [2] I believe the
> summary of that discussion was:

> a) to deal with the core team sprawl, we could move the compute stuff out of
> python-openstackclient and into an osc-compute plugin (like the
> osc-placement plugin for the placement service); then we could create a new
> core team which would have python-openstackclient-core as a superset

This is not my first choice but is not terrible either...

> b) Dean suggested that we close the compute API gaps in the SDK first, but
> that could take a long time as well...but it sounded like we could use the
> SDK for things that existed in the SDK and use novaclient for things that
> didn't yet exist in the SDK

Yup, this can be done in parallel.  The unit of decision for use sdk
vs use XXXclient lib is per-API call.  If the client lib can use an
SDK adapter/session it becomes even better.  I think the priority for
what to address first should be guided by complete gaps in coverage
and the need for microversion-driven changes.

> This might be a candidate for one of these multi-release goals that the TC
> started talking about at the Stein PTG. I could see something like this
> being a goal for Stein:
>
> "Each project owns its own osc- plugin for OSC CLIs"
>
> That deals with the core team and sprawl issue, especially with stevemar
> being gone and dtroyer being distracted by shiny x-men bird related things.
> That also seems relatively manageable for all projects to do in a single
> release. Having a single-release goal of "close all gaps across all service
> types" is going to be extremely tough for any older projects that had CLIs
> before OSC was created (nova/cinder/glance/keystone). For newer projects,
> like placement, it's not a problem because they never created any other CLI
> outside of OSC.

I think the major difficulty here is simply how to migrate users from
today state to future state in a reasonable manner.  If we could teach
OSC how to handle the same command being defined in multiple plugins
properly (hello entrypoints!) it could be much simpler as we could
start creating the new plugins and switch as the new command
implementations become available rather than having a hard cutover.
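
For anyone unfamiliar with the mechanics here: OSC discovers plugin
commands through setuptools entry points declared in each plugin's
setup.cfg, roughly like this (the names below are made up for
illustration):

  [entry_points]
  openstack.cli.extension =
      mynew = mynewclient.osc.plugin
  openstack.mynew.v1 =
      thing_list = mynewclient.osc.v1.thing:ListThing
      thing_show = mynewclient.osc.v1.thing:ShowThing

Teaching OSC to resolve the case where two such namespaces register the
same command name is what would let new plugins shadow the in-repo
implementations incrementally instead of requiring a hard cutover.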

Or maybe the definition of OSC v4 is as above and we just work at it
until complete and cut over at the end.  Note that the current APIs
that are in-repo (Compute, Identity, Image, Network, Object, Volume)
are all implemented using the plugin structure; OSC v4 could start as
the breaking out of those without command changes (except new
migration commands!) and then the plugins all re-write and update at
their own tempo.  Dang, did I just deconstruct my project?

One thing I don't like about that is we just replace N client libs
with N (or more) plugins now and the number of things a user must
install doesn't go down.  I would like to hear from anyone who deals
with installing OSC whether that is still a big deal, or whether I should
let go of that worry?

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Stein PTG summary

2018-09-26 Thread melanie witt

Hello everybody,

I've written up a high level summary of the discussions we had at the 
PTG -- please feel free to reply to this thread to fill in anything I've 
missed.


We used our PTG etherpad:

https://etherpad.openstack.org/p/nova-ptg-stein

as an agenda and each topic we discussed was filled in with agreements, 
todos, and action items during the discussion. Please check out the 
etherpad to find notes relevant to your topics of interest, and reach 
out to us on IRC in #openstack-nova, on this mailing list with the 
[nova] tag, or by email to me if you have any questions.


Now, onto the high level summary:

Rocky retrospective
===================
We began Wednesday morning with a retro on the Rocky cycle and captured 
notes on this etherpad:


https://etherpad.openstack.org/p/nova-rocky-retrospective

The runways review process was seen as overall positive and helped get 
some blueprint implementations merged that had languished in previous 
cycles. We agreed to continue with the runways process as-is in Stein 
and use it for approved blueprints. We did note that we could do better 
at queuing important approved work into runways, such as 
placement-related efforts that were not added to runways last cycle.


We discussed whether or not to move the spec freeze deadline back to 
milestone 1 (we used milestone 2 in Rocky). I have an action item to dig 
into whether or not the late breaking regressions we found at RC time:


https://etherpad.openstack.org/p/nova-rocky-release-candidate-todo

were related to the later spec freeze at milestone 2. The question we 
want to answer is: did a later spec freeze lead to implementations 
landing later and resulting in the late detection of regressions at 
release candidate time?


Finally, we discussed a lot of things around project management, 
end-to-end themes for a cycle, and people generally not feeling they had 
clarity throughout the cycle about which efforts and blueprints were 
most important, aside from runways. We got a lot of work done in Rocky, 
but not as much of it materialized into user-facing features and 
improvements as it did in Queens. Last cycle, we had thought runways 
would capture what is a priority at any given time, but looking back, we 
determined it would be helpful if we still had over-arching 
goals/efforts/features written down for people to refer to throughout 
the cycle. We dove deeper into that discussion on Friday during the hour 
before lunch, where we came up with user-facing themes we aim to 
accomplish in the Stein cycle:


https://etherpad.openstack.org/p/nova-ptg-stein-priorities

Note that these are _not_ meant to preempt anything in runways, these 
are just 1) for my use as a project manager and 2) for everyone's use to 
keep a bigger picture of our goals for the cycle in their heads, to aid 
in their work and review outside of runways.


Themes
======
With that, I'll briefly mention the themes we came up with for the cycle:

* Compute nodes capable of upgrading to, and existing with, nested resource
providers for multiple GPU types


* Multi-cell operational enhancements: resilience to "down" or 
poor-performing cells and cross-cell instance migration


* Volume-backed user experience and API hardening: ability to specify 
volume type during boot-from-volume, detach/attach of root volume, and 
volume-backed rebuild


These are the user-visible features and functionality we aim to deliver 
and we'll keep tabs on these efforts throughout the cycle to keep them 
making progress.


Placement
=========
As usual, we had a lot of discussions on placement-related topics, so 
I'll try to highlight the main things that stand out to me. Please see 
the "Placement" section of our PTG etherpad for all the details and 
additional topics we discussed.


We discussed the regression in behavior that happened when we removed 
the Aggregate[Core|Ram|Disk]Filters from the scheduler filters -- these 
filters allowed operators to set overcommit allocation ratios per 
aggregate instead of per host. We agreed on the importance of restoring 
this functionality and hashed out a concrete plan, with two specs needed 
to move forward:


https://review.openstack.org/552105
https://review.openstack.org/544683

The other standout discussions were around the placement extraction and 
closing the gaps in nested resource providers. For the placement 
extraction, we are focusing on full support of an upgrade from 
integrated placement => extracted placement, including assisting with 
making sure deployment tools like OpenStack-Ansible and TripleO are able 
to support the upgrade. For closing the gaps in nested resource 
providers, there are many parts to it that are documented on the 
aforementioned PTG etherpads. By closing the gaps with nested resource 
providers, we'll open the door for being able to support minimum 
bandwidth scheduling as well.


Cells
=====
On cells, the main discussions were around resilience to "down" or
poor-performing cells and cross-cell instance migration.

Re: [openstack-dev] [nova] Stein PTG summary

2018-09-26 Thread Sylvain Bauza
Thanks for the recap email, Mel. Just a question inline for all the people
that were in the room on Wednesday.

On Thu, Sep 27, 2018 at 00:10, melanie witt wrote:

> [...]

So, during this day, we also discussed NUMA affinity and we said that we
could possibly use nested resource providers for NUMA cells in Stein, but
given we don't yet have a specific Placement API query, NUMA affinity
should still be using the NUMATopologyFilter.
That said, when looking at how to use this filter for vGPUs, it looks to
me that I'd need to provide a new version of the NUMACell object and
modify the virt.hardware module. Are we also accepting this (given it's a
temporary solution), or should we wait for the Placement API support?

Folks, what are your thoughts?

Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Monty Taylor

On 09/26/2018 04:33 PM, Dean Troyer wrote:

On Wed, Sep 26, 2018 at 3:44 PM, Matt Riedemann  wrote:

I started documenting the compute API gaps in OSC last release [1]. It's a
big gap and needs a lot of work, even for existing CLIs (the cold/live
migration CLIs in OSC are a mess, and you can't even boot from volume where
nova creates the volume for you). That's also why I put something into the
etherpad about the OSC core team even being able to handle an onslaught of
changes for a goal like this.


The OSC core team is very thin, yes, it seems as though companies
don't like to spend money on client-facing things...I'll be in the
hall following this thread should anyone want to talk...

The migration commands are a mess, mostly because I got them wrong to
start with and we have only tried to patch it up, this is one area I
think we need to wipe clean and fix properly.  Yay! Major version
release!


I thought the same, and we talked about this at the Austin summit, but OSC
is inconsistent about this (you can live migrate a server but you can't
evacuate it - there is no CLI for evacuation). It also came up at the Stein
PTG with Dean in the nova room giving us some direction. [2] I believe the
summary of that discussion was:



a) to deal with the core team sprawl, we could move the compute stuff out of
python-openstackclient and into an osc-compute plugin (like the
osc-placement plugin for the placement service); then we could create a new
core team which would have python-openstackclient-core as a superset


This is not my first choice but is not terrible either...


b) Dean suggested that we close the compute API gaps in the SDK first, but
that could take a long time as well...but it sounded like we could use the
SDK for things that existed in the SDK and use novaclient for things that
didn't yet exist in the SDK


Yup, this can be done in parallel.  The unit of decision for use sdk
vs use XXXclient lib is per-API call.  If the client lib can use an
SDK adapter/session it becomes even better.  I think the priority for
what to address first should be guided by complete gaps in coverage
and the need for microversion-driven changes.


This might be a candidate for one of these multi-release goals that the TC
started talking about at the Stein PTG. I could see something like this
being a goal for Stein:

"Each project owns its own osc- plugin for OSC CLIs"

That deals with the core team and sprawl issue, especially with stevemar
being gone and dtroyer being distracted by shiny x-men bird related things.
That also seems relatively manageable for all projects to do in a single
release. Having a single-release goal of "close all gaps across all service
types" is going to be extremely tough for any older projects that had CLIs
before OSC was created (nova/cinder/glance/keystone). For newer projects,
like placement, it's not a problem because they never created any other CLI
outside of OSC.


I think the major difficulty here is simply how to migrate users from
today state to future state in a reasonable manner.  If we could teach
OSC how to handle the same command being defined in multiple plugins
properly (hello entrypoints!) it could be much simpler as we could
start creating the new plugins and switch as the new command
implementations become available rather than having a hard cutover.

Or maybe the definition of OSC v4 is as above and we just work at it
until complete and cut over at the end.


I think that sounds pretty good, actually. We can also put the 'just get 
the sdk Connection' code in.


You mentioned earlier that python-*clients that can take an existing ksa 
Adapter as a constructor parameter make this easier - maybe let's put 
that down as a work item for this? Because if we could do that, then we 
know we've got discovery and config working consistently across the 
board no matter whether a call is using sdk or python-*client primitives 
under the covers - so everything will respond to env vars and command 
line options and clouds.yaml consistently.


For that to work, a python-*client Client that took a 
keystoneauth1.adapter.Adapter would need to take it as gospel and not do 
further processing of config; otherwise the point is defeated. But it 
should be straightforward to do in most cases, yeah?
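
As a concrete sketch of what that could look like (the endpoint and
credential values are made up, and a Client that accepts the Adapter is
hypothetical; the Adapter API itself is keystoneauth1's):

  from keystoneauth1 import adapter, loading, session

  loader = loading.get_plugin_loader('password')
  auth = loader.load_from_options(
      auth_url='https://keystone.example.com/v3',
      username='demo', password='secret', project_name='demo',
      user_domain_id='default', project_domain_id='default')
  sess = session.Session(auth=auth)

  # Discovery, version negotiation, region and interface selection are
  # all resolved once, here...
  compute = adapter.Adapter(session=sess, service_type='compute',
                            interface='public', region_name='RegionOne')

  # ...so a Client constructed from this Adapter must use it as-is
  # rather than re-resolving any of those settings from config.
  servers = compute.get('/servers').json()['servers']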



 Note that the current APIs
that are in-repo (Compute, Identity, Image, Network, Object, Volume)
are all implemented using the plugin structure; OSC v4 could start as
the breaking out of those without command changes (except new
migration commands!) and then the plugins all re-write and update at
their own tempo.  Dang, did I just deconstruct my project?


Main difference is making sure these new deconstructed plugin teams 
understand the client support lifecycle - which is that we don't drop 
support for old versions of services in OSC (or SDK). It's a shift from 
the support lifecycle and POV of python-*client, but it's important and 
we just need to all be on the same page.



One thing I don't like 

Re: [openstack-dev] [nova] Stein PTG summary

2018-09-26 Thread Matt Riedemann

On 9/26/2018 5:30 PM, Sylvain Bauza wrote:
So, during this day, we also discussed NUMA affinity and we said that we 
could possibly use nested resource providers for NUMA cells in Stein, but 
given we don't yet have a specific Placement API query, NUMA affinity 
should still be using the NUMATopologyFilter.
That said, when looking at how to use this filter for vGPUs, it looks to 
me that I'd need to provide a new version of the NUMACell object and 
modify the virt.hardware module. Are we also accepting this (given it's 
a temporary solution), or should we wait for the Placement API support?


Folks, what are your thoughts?


I'm pretty sure we've said several times already that modeling NUMA in 
Placement is not something for which we're holding up the extraction.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] subnet pool can not delete prefixes

2018-09-26 Thread wenran xiao
Related bug: https://bugs.launchpad.net/neutron/+bug/1792901
Any suggestion is welcome!

Cheers,
-wenran
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Last day for TC voting

2018-09-26 Thread Emmet Hikory
We are coming down to the last hours for voting in the TC election.  Voting
ends Sep 27, 2018 23:45 UTC.

Search your gerrit preferred email address [0] for the following subject:
Poll: Stein TC Election

That is your ballot and links you to the voting application. Please vote.
If you have voted, please encourage your colleagues to vote.

Candidate statements are linked to the names of all confirmed candidates:
https://governance.openstack.org/election/#stein-tc-candidates

What to do if you don't see the email and have a commit in at least one of the
official program projects [1]:
* check the trash of your gerrit Preferred Email address [0], in case it went
  into trash or spam
* wait a bit and check again, in case your email server is a bit slow
* find the sha of at least one commit from the program project repos [1] and
  email the election officials [2]
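
For example, running something like the following (with your gerrit
Preferred Email) in a clone of one of those repos will print a suitable
sha:

  git log --author=you@example.com -n 1 --format=%H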

If we can confirm that you are entitled to vote, we will add you to the voters
list and you will be emailed a ballot.

Please vote!  Last time we checked, there were over 1200 unvoted ballots, so
your vote can make a significant difference.

[0] Sign into review.openstack.org: Go to Settings > Contact Information.
Look at the email listed as your Preferred Email.  That is where the
ballot has been sent.
[1] 
https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=sept-2018-elections
[2] https://governance.openstack.org/election/#election-officials

-- 
Emmet HIKORY


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][senlin] Action Required. Idea to propose for a forum for autoscaling features integration

2018-09-26 Thread Qiming Teng
Hi,

For a number of reasons I cannot join you for this event, but I would
like to leave some comments here for reference.

On Tue, Sep 18, 2018 at 11:27:29AM +0800, Rico Lin wrote:
> TL;DR:
> How about a forum in Berlin for discussing autoscaling integration (as a
> long-term goal) in OpenStack?

First of all, there is nothing called "auto-scaling" in my mind, and
"auto" is most of the time a scary word to users. It means the service
or tool is hiding some details from the users when it is doing something
without human intervention. There are cases where this can be useful,
but there are also many other cases where the service or tool messes
things up into a state that is difficult to recover from. What matters
most is the usage scenarios we support. I don't think users care that
much how project teams are organized.
 
> Hi all, as we start to discuss how we can join development from Heat and
> Senlin, as we originally planned when we decided to fork Senlin from Heat
> a long time ago.
> 
> IMO the biggest issues we have now are that we have users using
> autoscaling in both services, there appears to be a lot of duplicated
> effort, and some great enhancements exist in one service but not the
> other.
> As a long-term goal (from the beginning), we should try to join
> development to sync functionality, and move users to use Senlin for
> autoscaling. So we should start to review this goal, or at least we
> should try to discuss how we can help users without breaking or
> enforcing anything.

The original plan, iirc, was to make sure Senlin resources are supported
in Heat, and that we would gradually fade out the existing
'AutoScalingGroup' and related resource types in Heat. I have no idea
when Heat became interested in "auto-scaling" again.

> It would be great if we could build a common library across projects, use
> that common library in both projects, make sure we have all improvements
> implemented in that library, and finally have the Heat autoscaling group
> use Senlin through that library. And in the long term, we would move all
> users to one more general way instead of multiple ways that generate huge
> confusion for users.

The so-called "auto-scaling" is always a solution, built by
orchestrating many moving parts across the infrastructure. In some
cases, you may have to install agents into VMs for workload metering. I
am not convinced this can be done using a library approach.

> As an action, I propose we have a forum in Berlin and sync up all efforts
> from both teams to plan for an ideal scenario design. Forum submissions
> [1] end on 9/26.
> It would also benefit both teams to start thinking about how they can
> modularize those functionalities for easier integration in the future.
> 
> In some Heat PTG sessions, we kept bringing up ideas on how we can improve
> current solutions for autoscaling. We should start to talk about whether
> it would make sense to combine all group resources into one, and inherit
> from it for other resources (ideally deprecating the rest of the resource
> types). For example, we can do batch create/delete in ResourceGroup, but
> not in ASG. We definitely have some unsynchronized work within Heat, and
> across Heat and Senlin.

Totally agree with you on this. We should strive to minimize the
technologies users have to master when they have a need.
> Please let me know who is interested in this idea, so we can work together
> and reach our goal step by step.
> Also please share your thoughts if you have any concerns about this
> proposal.
> 
> [1] https://www.openstack.org/summit/berlin-2018/call-for-presentations
> -- 
> May The Force of OpenStack Be With You,
> 
> Rico Lin (irc: ricolin)

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] subnet pool can not delete prefixes

2018-09-26 Thread Brian Haley

On 09/26/2018 10:11 PM, wenran xiao wrote:

Relation bug: https://bugs.launchpad.net/neutron/+bug/1792901
Any suggestion is welcome!


Removing a prefix from a subnetpool is not supported, there was an 
inadvertent change to the client that made it seem possible.  We are in 
the process of reverting it:


https://review.openstack.org/#/c/599633/

-Brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [storyboard] Prioritization?

2018-09-26 Thread Jim Rollenhagen
On Wed, Sep 26, 2018 at 8:17 AM Adam Coldrick  wrote:

> On Tue, 2018-09-25 at 18:40 +, CARVER, PAUL wrote:
> [...]
> > There is certainly room for additional means of juggling and
> > discussing/negotiating priorities in the stages before work really gets
> > under way, but if it doesn't eventually become clear
> >
> > 1) who's doing the work
> > 2) when are they targeting completion
> > 3) what (if anything) is higher up on their todo list
>
> Its entirely possible to track these three things in StoryBoard today, and
> for other people to view that information.
>
> 1) Task assignee, though this should be set when someone actually starts
> doing the work rather than being used to indicate "$person intends to do
> this at some point"
> 2) Due date on a card in a board
> 3) Lanes in that board ordered by priority
>

I, for one, would not want to scroll through every lane on a large
project's bug board to find the priority or target date for a given bug.
For example, Nova has 819 open bugs right now.

It would be a much better user experience to be able to open a specific bug
and see the priority or target date.

// jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] ceph osd deploy fails

2018-09-26 Thread Florian Engelmann

Hi,

I tried to deploy Rocky in a multinode setup but ceph-osd fails with:


failed: [xxx-poc2] (item=[0, {u'fs_uuid': u'', u'bs_wal_label': 
u'', u'external_journal': False, u'bs_blk_label': u'', 
u'bs_db_partition_num': u'', u'journal_device': u'', u'journal': u'', 
u'partition': u'/dev/nvme0n1', u'bs_wal_partition_num': u'', 
u'fs_label': u'', u'journal_num': 0, u'bs_wal_device': u'', 
u'partition_num': u'1', u'bs_db_label': u'', u'bs_blk_partition_num': 
u'', u'device': u'/dev/nvme0n1', u'bs_db_device': u'', 
u'partition_label': u'KOLLA_CEPH_OSD_BOOTSTRAP_BS', u'bs_blk_device': 
u''}]) => {

"changed": true,
"item": [
0,
{
"bs_blk_device": "",
"bs_blk_label": "",
"bs_blk_partition_num": "",
"bs_db_device": "",
"bs_db_label": "",
"bs_db_partition_num": "",
"bs_wal_device": "",
"bs_wal_label": "",
"bs_wal_partition_num": "",
"device": "/dev/nvme0n1",
"external_journal": false,
"fs_label": "",
"fs_uuid": "",
"journal": "",
"journal_device": "",
"journal_num": 0,
"partition": "/dev/nvme0n1",
"partition_label": "KOLLA_CEPH_OSD_BOOTSTRAP_BS",
"partition_num": "1"
}
]
}

MSG:

Container exited with non-zero return code 2

We tried to debug the error message by starting the container with a 
modified entrypoint, but we are stuck at the following point right now:



docker run  -e "HOSTNAME=10.0.153.11" -e "JOURNAL_DEV=" -e 
"JOURNAL_PARTITION=" -e "JOURNAL_PARTITION_NUM=0" -e 
"KOLLA_BOOTSTRAP=null" -e "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS" -e 
"KOLLA_SERVICE_NAME=bootstrap-osd-0" -e "OSD_BS_BLK_DEV=" -e 
"OSD_BS_BLK_LABEL=" -e "OSD_BS_BLK_PARTNUM=" -e "OSD_BS_DB_DEV=" -e 
"OSD_BS_DB_LABEL=" -e "OSD_BS_DB_PARTNUM=" -e "OSD_BS_DEV=/dev/nvme0n1" 
-e "OSD_BS_LABEL=KOLLA_CEPH_OSD_BOOTSTRAP_BS" -e "OSD_BS_PARTNUM=1" -e 
"OSD_BS_WAL_DEV=" -e "OSD_BS_WAL_LABEL=" -e "OSD_BS_WAL_PARTNUM=" -e 
"OSD_DEV=/dev/nvme0n1" -e "OSD_FILESYSTEM=xfs" -e "OSD_INITIAL_WEIGHT=1" 
-e "OSD_PARTITION=/dev/nvme0n1" -e "OSD_PARTITION_NUM=1" -e 
"OSD_STORETYPE=bluestore" -e "USE_EXTERNAL_JOURNAL=false"   -v 
"/etc/kolla//ceph-osd/:/var/lib/kolla/config_files/:ro" -v 
"/etc/localtime:/etc/localtime:ro" -v "/dev/:/dev/" -v 
"kolla_logs:/var/log/kolla/" -ti --privileged=true --entrypoint 
/bin/bash 
10.0.128.7:5000/openstack/openstack-kolla-cfg/ubuntu-source-ceph-osd:7.0.0.3




cat /var/lib/kolla/config_files/ceph.client.admin.keyring > 
/etc/ceph/ceph.client.admin.keyring



cat /var/lib/kolla/config_files/ceph.conf > /etc/ceph/ceph.conf


(bootstrap-osd-0)[root@985e2dee22bc /]# /usr/bin/ceph-osd -d 
--public-addr 10.0.153.11 --cluster-addr 10.0.153.11

usage: ceph-osd -i <osdid> [flags]
  --osd-data PATH           data directory
  --osd-journal PATH        journal file or block device
  --mkfs                    create a [new] data directory
  --mkkey                   generate a new secret key. This is normally used
                            in combination with --mkfs
  --convert-filestore       run any pending upgrade operations
  --flush-journal           flush all data out of journal
  --mkjournal               initialize a new journal
  --check-wants-journal     check whether a journal is desired
  --check-allows-journal    check whether a journal is allowed
  --check-needs-journal     check whether a journal is required
  --debug_osd <N>           set debug level (e.g. 10)
  --get-device-fsid PATH    get OSD fsid for the given block device

  --conf/-c FILE            read configuration from the given configuration file
  --id/-i ID                set ID portion of my name
  --name/-n TYPE.ID         set name
  --cluster NAME            set cluster name (default: ceph)
  --setuser USER            set uid to user or uid (and gid to user's gid)
  --setgroup GROUP          set gid to group or gid
  --version                 show version and quit

  -d                        run in foreground, log to stderr.
  -f                        run in foreground, log to usual location.
  --debug_ms N              set message debug level (e.g. 1)
2018-09-26 12:28:07.801066 7fbda64b4e40  0 ceph version 12.2.4 
(52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable), process 
(unknown), pid 46
2018-09-26 12:28:07.801078 7fbda64b4e40 -1 must specify '-i #' where # 
is the osd number



But it looks like "-i" is not set anywhere?

grep command 
/opt/stack/kolla-ansible/ansible/roles/ceph/templates/ceph-osd.json.j2
"command": "/usr/bin/ceph-osd -f --public-addr {{ 
hostvars[inventory_hostname]['ansible_' + 
storage_interface]['ipv4']['address'] }} --cluster-addr {{ 
hostvars[inventory_hostname]['ansible_' + 
cluster_interface]['ipv4']['address'] }}",


What's wrong with our setup?

All the best,
Flo


--

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: 

Re: [openstack-dev] [kolla] Proposing Chason Chan (chason) as kolla-ansible core

2018-09-26 Thread Marcin Juszkiewicz
+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Domain-namespaced user attributes in SAML assertions from Keystone IdPs

2018-09-26 Thread vishakha agarwal
> From : Colleen Murphy 
> To : 
> Date : Tue, 25 Sep 2018 18:33:30 +0900
> Subject : Re: [openstack-dev] [keystone] Domain-namespaced user attributes in 
> SAML assertions from Keystone IdPs
>  Forwarded message 
>  > On Mon, Sep 24, 2018, at 8:40 PM, John Dennis wrote:
>  > > On 9/24/18 8:00 AM, Colleen Murphy wrote:
>  > > > This is in regard to https://launchpad.net/bugs/1641625 and the 
> proposed patch https://review.openstack.org/588211 for it. Thanks Vishakha 
> for getting the ball rolling.
>  > > >
>  > > > tl;dr: Keystone as an IdP should support sending 
> non-strings/lists-of-strings as user attribute values, specifically lists of 
> keystone groups, here's how that might happen.
>  > > >
>  > > > Problem statement:
>  > > >
>  > > > When keystone is set up as a service provider with an external 
> non-keystone identity provider, it is common to configure the mapping rules 
> to accept a list of group names from the IdP and map them to some property of 
> a local keystone user, usually also a keystone group name. When keystone acts 
> as the IdP, it's not currently possible to send a group name as a user 
> property in the assertion. There are a few problems:
>  > > >
>  > > >  1. We haven't added any openstack_groups key in the creation of 
> the SAML assertion 
> (http://git.openstack.org/cgit/openstack/keystone/tree/keystone/federation/idp.py?h=14.0.0#n164).
>  > > >  2. If we did, this would not be enough. Unlike other IdPs, in 
> keystone there can be multiple groups with the same name, namespaced by 
> domain. So it's not enough for the SAML AttributeStatement to contain a 
> semi-colon-separated list of group names, since a user could theoretically be 
> a member of two or more groups with the same name.
>  > > > * Why can't we just send group IDs, which are unique? Because two 
> different keystones are not going to have independent groups with the same 
> UUID, so we cannot possibly map an ID of a group from keystone A to the ID of 
> a different group in keystone B. We could map the ID of the group in in A to 
> the name of a group in B but then operators need to create groups with UUIDs 
> as names which is a little awkward for both the operator and the user who now 
> is a member of groups with nondescriptive names.
>  > > >  3. If we then were able to encode a complex type like a group 
> dict in a SAML assertion, we'd have to deal with it on the service provider 
> side by being able to parse such an environment variable from the Apache 
> headers.
>  > > >  4. The current mapping rules engine uses basic python string 
> formatting to translate remote key-value pairs to local rules. We would need 
> to change the mapping API to work with values more complex than strings and 
> lists of strings.
>  > > >
>  > > > Possible solution:
>  > > >
>  > > > Vishakha's patch (https://review.openstack.org/588211) starts to solve 
> (1) but it doesn't go far enough to solve (2-4). What we talked about at the 
> PTG was:
>  > > >
>  > > >  2. Encode the group+domain as a string, for example by using the 
> dict string repr or a string representation of some custom XML and maybe 
> base64 encoding it.
>  > > >  * It's not totally clear whether the AttributeValue class of 
> the pysaml2 library supports any data types outside of the xmlns:xs namespace 
> or whether nested XML is an option, so encoding the whole thing as an 
> xs:string seems like the simplest solution.
>  > > >  3. The SP will have to be aware that openstack_groups is a 
> special key that needs the encoding reversed.
>  > > >  * I wrote down "MultiDict" in my notes but I don't recall 
> exactly what format the environment variable would take that would make a 
> MultiDict make sense here, in any case I think encoding the whole thing as a 
> string eliminates the need for this.
>  > > >  4. We didn't talk about the mapping API, but here's what I think. 
> If we were just talking about group names, the mapping API today would work 
> like this (slight oversimplification for brevity):
>  > > >
>  > > > Given a list of openstack_groups like ["A", "B", "C"], it would work 
> like this:
>  > > >
>  > > > [
>  > > >{
>  > > >  "local":
>  > > >  [
>  > > >{
>  > > >  "group":
>  > > >  {
>  > > >"name": "{0}",
>  > > >"domain":
>  > > >{
>  > > >  "name": "federated_domain"
>  > > >}
>  > > >  }
>  > > >}
>  > > >  ], "remote":
>  > > >  [
>  > > >{
>  > > >  "type": "openstack_groups"
>  > > >}
>  > > >  ]
>  > > >}
>  > > > ]
>  > > > (paste in case the spacing makes this unreadable: 
> http://paste.openstack.org/show/730623/ )
>  > > >
>  > > > But now, we no longer have a list of strings but something more like 
> [{"name": "A", "domain_name": "Default"} {"name": "B", "domain_name": 
> "Default", "name": "A", 

[openstack-dev] [cyborg]zero tolerance policy on padding activities

2018-09-26 Thread Zhipeng Huang
Hi all,

I want to emphasize the zero tolerance policy in the Cyborg project
regarding padding activities. If you look at the gerrit record [0] you
will probably get the idea: out of the 15 abandoned patches, only the
ones from jiapei and shaohe were actually meant to do real fixing.

We have the #openstack-cyborg IRC channel, the community mailing list, as
well as individual core members' email addresses you can reach out to. We
have also set up a WeChat group for Chinese developers where the
atmosphere is welcoming and funny gifs fly around all the time. There are
more than enough avenues to help you actually get involved with the
project.

Do the right thing.

[0]
https://review.openstack.org/#/q/status:abandoned+project:openstack/cyborg+label:Code-Review%253D-2


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Colleen Murphy
Thanks for the summary, Ildiko. I have some questions inline.

On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:



> 
> We agreed to prefer federation for Keystone and came up with two work 
> items to cover missing functionality:
> 
> * Keystone to trust a token from an ID Provider master and when the auth 
> method is called, perform an idempotent creation of the user, project 
> and role assignments according to the assertions made in the token

This sounds like it is based on the customizations done at Oath, which to my 
recollection did not use the actual federation implementation in keystone due 
to its reliance on Athenz (I think?) as an identity manager. Something similar 
can be accomplished in standard keystone with the mapping API, which can 
cause dynamic generation of a shadow user, project and role assignments.

> * Keystone should support the creation of users and projects with 
> predictable UUIDs (eg.: hash of the name of the users and projects). 
> This greatly simplifies Image federation and telemetry gathering

I was in and out of the room and don't recall this discussion exactly. We have 
historically pushed back hard against allowing setting a project ID via the 
API, though I can see predictable-but-not-settable as less problematic. One of 
the use cases from the past was being able to use the same token in different 
regions, which is problematic from a security perspective. Is that the idea 
here? Or could someone provide more details on why this is needed?
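
For context, "predictable" could be as simple as a name-based UUID along
these lines (the namespace value and the name format are made up for
illustration):

  import uuid

  # hypothetical namespace shared by all clouds in the federation
  NAMESPACE = uuid.UUID('a3dc2d42-1d9c-4f6e-9c7b-2f0d5a9b1c33')

  # every cloud derives the same ID for the same domain:project pair
  project_id = uuid.uuid5(NAMESPACE, 'Default:myproject').hex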

Were there any volunteers to help write up specs and work on the 
implementations in keystone?



Colleen (cmurphy)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposing Chason Chan (chason) as kolla-ansible core

2018-09-26 Thread Jeffrey Zhang
+1
good job

On Wed, Sep 26, 2018 at 9:30 AM zhubingbing 
wrote:

> +1
>
>
>
>
>
> At 2018-09-25 23:47:10, Eduardo Gonzalez  wrote:
>
> Hi,
>
> I would like to propose Chason Chan to the kolla-ansible core team.
>
> Chason is been working on addition of Vitrage roles, rework VpnaaS
> service, maintaining
> documentation as well as fixing many bugs.
>
> Voting will be open for 14 days (until 9th of Oct).
>
> Kolla-ansible cores, please leave a vote.
> Consider this mail my +1 vote
>
> Regards,
> Eduardo
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [storyboard] why use different "bug" tags per project?

2018-09-26 Thread Chris Friesen

Hi,

At the PTG, it was suggested that each project should tag their bugs 
with "-bug" to avoid tags being "leaked" across projects, or 
something like that.


Could someone elaborate on why this was recommended?  It seems to me 
that it'd be better for all projects to just use the "bug" tag for 
consistency.


If you want to get all bugs in a specific project it would be pretty 
easy to search for stories with a tag of "bug" and a project of "X".


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [watcher] weekly meeting

2018-09-26 Thread Чадин Александр Сергеевич
Greetings,

We’ll have a meeting today at 8:00 UTC in the #openstack-meeting-3 channel.

Best Regards,

Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [storyboard] Prioritization?

2018-09-26 Thread Adam Coldrick
On Tue, 2018-09-25 at 13:41 -0400, Doug Hellmann wrote:
> Adam Coldrick  writes:
> > For tasks I am less concerned in that aspect since cross-project
> > support
> > isn't hurt, but remain of the opinion that a global field is the wrong
> > approach since it means that only one person (or group of people) gets
> > to
> > visibly express their opinion on the priority of the task.
> 
> While I agree that not everyone attaches the same priority to a given
> task, and it's important for everyone to be able to have their own say
> in the relative importance of tasks/stories, I think it's more important
> than you're crediting for downstream consumers to have a consistent way
> to understand the priority attached by the person(s) doing the
> implementation work.

I think you're right. The existing implementation hasn't really considered
the case of a downstream consumer unused to the differences in StoryBoard
just turning up at the task tracker to find out something like "what are
the priorities of the oslo team?", and it shows in how undiscoverable
that information is to an outsider, no matter which of the workflows we
suggest is being used.

> > Allowing multiple groups to express opinions on the priority of the
> > same
> > tasks allows situations where (to use a real world example I saw
> > recently,
> > but not in OpenStack) an upstream project marks a bug as medium
> > priority
> > for whatever reason, but a downstream user of that project is
> > completely
> > broken by that bug, meaning either providing a fix to it or persuading
> > someone else to is of critical importance to them.
> 
> This example is excellent, and I think it supports my position.

I can see that, but I also don't think our positions are entirely
incompatible. In the example, the downstream user was one of the main
users of the upstream project and there was some contributor overlap.

In my ideal world the downstream project would've expressed the priority
in a worklist or board that upstream people were subscribed (or otherwise
paying attention) to. Then, the upstream project would've set their
priority in a board or worklist which the downstream folk also pay
attention to somehow (since they are interested in how upstream are
prioritising work). This way a contributor interested in the priorities of
both projects could see the overlap, and perhaps use that to decide what
to work on next. Also, since downstream have a way to pay attention to
upstream's priority, they can see the low "official" priority and go and
have any discussions needed.

> An important area where using boards or worklists falls short of my own
> needs is that, as far as I know, it is not possible to subscribe to
> notifications for when a story or task is added to a list or board. So
> as a person who submits a story, I have no way of knowing when the
> team(s) working on it add it to (or remove it from) a priority list or
> change its priority by moving it to a different lane in a board.
> Communicating about what we're doing is as important as gathering and
> tracking the list of tasks in the first place. Without a notification
> that the priority of a story or task has been lowered, how would I know
> that I need to go try to persuade the team responsible to raise it back
> up?

It is indeed not possible for the scenario I describe above to work neatly
in StoryBoard today, because boards and worklists don't give
notifications. That's because we've not got round to finishing that part
of the implementation yet, rather than by design.

Worklists do currently generate a history of changes made to them; there is
just no good way to see it anywhere, and no notifications are sent based on it.

> Even if we add (or there is) some way for me to receive a notification
> based on board or list membership, without a real priority field we have
> several different ways to express priority (different tag names, a
> single worklist that's kept in order, a board with separate columns for
> each status, etc.). That means each team could potentially use a
> different way, which in turn means downstream consumers have to
> discover, understand, and subscribe to all of those various ways, and
> use them correctly, for every team they are tracking. I think that's an
> unreasonable burden to place on someone who is not working in the
> community constantly, as is the case for many of our operators who
> report bugs.

This is the part I've not really considered before for StoryBoard. Perhaps
we should've been defining an "official" prioritisation workflow and
trying to make that discoverable.

> > With global priority there is a trade-off, either the bug tracker
> > displays
> > priorities with no reference as to who they are important to,
> > downstream
> > duplicate the issue elsewhere to track their priority, or their
> > expression
> > of how important the issue is is lost in a comment in order to
> > maintain
> > the state of "all priorities are determined by the core team".
> 
> I suppose that 

[openstack-dev] [taas] rocky

2018-09-26 Thread Takashi Yamamoto
hi,

it seems we forgot to create the rocky branch.
i'll make a release and the branch sooner or later, unless someone
beats me to it.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev