Re: [openstack-dev] [heat] [devstack] [infra] heat api services with uwsgi

2017-05-22 Thread Juan Antonio Osorio
On Tue, May 23, 2017 at 8:23 AM, Rabi Mishra  wrote:

> Hi All,
>
> As per the updated community goal[1] for api deployment with wsgi, we have
> to transition to uwsgi rather than mod_wsgi at the gate. It also seems
> mod_wsgi support will be removed from devstack in Queens.
>
What do you mean, support for mod_wsgi will be removed from devstack in
Queens? Other projects have been using mod_wsgi, and we've been deploying
several services (even Heat) with it in TripleO.

>
> I've been working on a patch[2] for the transition and encountered a few
> issues, described below.
>
> 1. We encode the stack identifier (along with the path
> separator) in heatclient. So requests with encoded path separators are
> dropped by apache (with a 404) if we don't have the 'AllowEncodedSlashes On'
> directive in the site/vhost config[3].
>
That's correct. You might want to refer to the configuration we use in
puppet/TripleO. We got it working with that :).
https://github.com/openstack/puppet-heat/blob/master/manifests/wsgi/apache.pp#L111-L137

>
> Setting this for mod_proxy_uwsgi[4] seems to work on Fedora but not
> Ubuntu. From my testing, it seems it has to be set in 000-default.conf on
> Ubuntu.
>
> Rather than messing with the devstack plugin code, I went ahead and proposed
> a change not to encode the path separators in heatclient[5] (they would be
> decoded by apache with the 'AllowEncodedSlashes On' directive before being
> consumed by the service anyway), which seems to have fixed those 404s.
>
> Is there a generic way to set the above directive (when using
> apache+mod_proxy_uwsgi) in the devstack plugin?
>
> 2. With the above, most of the tests seem to work fine other than the
> ones using waitcondition, where we signal back from the VM to the api
> services. I could see "curl: (7) Failed to connect to 10.0.1.78 port 80:
> No route to host" in the VM console logs[6].
>
> The VM could connect to the heat api services on ports 8004/8000 without
> this patch, but I'm not sure why port 80 fails. I tried testing this locally
> and didn't see the issue, though.
>
> Is this due to some infra settings or something else?
>
>
> [1] https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html
>
> [2] https://review.openstack.org/#/c/462216/
>
> [3]  https://github.com/openstack/heat/blob/master/devstack/
> files/apache-heat-api.template#L9
>
> [4] http://logs.openstack.org/16/462216/6/check/gate-heat-dsvm-
> functional-convg-mysql-lbaasv2-non-apache-ubuntu-
> xenial/fbd06d6/logs/apache_config/heat-wsgi-api.conf.txt.gz
>
> [5] https://review.openstack.org/#/c/463510/
>
> [6] http://logs.openstack.org/16/462216/11/check/gate-heat-
> dsvm-functional-convg-mysql-lbaasv2-non-apache-ubuntu-
> xenial/e7d9e90/console.html#_2017-05-20_07_04_30_718021
>
>
> --
> Regards,
> Rabi Mishra
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com


[openstack-dev] [gnocchi] Running tests

2017-05-22 Thread Andres Alvarez
Hello everyone

I am having a hard time understanding the correct way to run the tests
in Gnocchi. I have already read about tox and testr, but it seems I still
can't get the tests to run.

I would really appreciate it if someone could explain the steps necessary to
get all the tests running.


[openstack-dev] [heat] [devstack] [infra] heat api services with uwsgi

2017-05-22 Thread Rabi Mishra
Hi All,

As per the updated community goal[1] for api deployment with wsgi, we have
to transition to uwsgi rather than mod_wsgi at the gate. It also seems
mod_wsgi support will be removed from devstack in Queens.

I've been working on a patch[2] for the transition and encountered a few
issues, described below.

1. We encode the stack identifier (along with the path
separator) in heatclient. So requests with encoded path separators are
dropped by apache (with a 404) if we don't have the 'AllowEncodedSlashes On'
directive in the site/vhost config[3].

Setting this for mod_proxy_uwsgi[4] seems to work on Fedora but not
Ubuntu. From my testing, it seems it has to be set in 000-default.conf on
Ubuntu.
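For reference, a sketch of where the directive under discussion lives in a vhost; the port and proxy target below are illustrative, not the actual gate config:

```
<VirtualHost *:8004>
    # Without this, %2F in the request path is rejected with a 404
    # before the request ever reaches the heat api service.
    AllowEncodedSlashes On
    ProxyPass / uwsgi://127.0.0.1:8004/
</VirtualHost>
```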

Rather than messing with the devstack plugin code, I went ahead and proposed
a change not to encode the path separators in heatclient[5] (they would be
decoded by apache with the 'AllowEncodedSlashes On' directive before being
consumed by the service anyway), which seems to have fixed those 404s.
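To illustrate the client-side difference that change makes, here is a minimal sketch using Python's urllib; the stack name and id are made up:

```python
from urllib.parse import quote

# A heat stack identifier contains a literal '/': "<stack_name>/<stack_id>"
identifier = "mystack/84c6e8f0"  # made-up values

# Encoding everything (no safe characters) turns '/' into %2F, which
# apache rejects with a 404 unless 'AllowEncodedSlashes On' is set.
fully_encoded = quote(identifier, safe="")
print(fully_encoded)   # mystack%2F84c6e8f0

# Leaving '/' in the safe set keeps the separator literal, so no
# special apache directive is needed.
path_preserved = quote(identifier, safe="/")
print(path_preserved)  # mystack/84c6e8f0
```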

Is there a generic way to set the above directive (when using
apache+mod_proxy_uwsgi) in the devstack plugin?

2. With the above, most of the tests seem to work fine other than the ones
using waitcondition, where we signal back from the VM to the api services.
I could see "curl: (7) Failed to connect to 10.0.1.78 port 80: No route to
host" in the VM console logs[6].

The VM could connect to the heat api services on ports 8004/8000 without this
patch, but I'm not sure why port 80 fails. I tried testing this locally and
didn't see the issue, though.

Is this due to some infra settings or something else?


[1] https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html

[2] https://review.openstack.org/#/c/462216/

[3]
https://github.com/openstack/heat/blob/master/devstack/files/apache-heat-api.template#L9

[4]
http://logs.openstack.org/16/462216/6/check/gate-heat-dsvm-functional-convg-mysql-lbaasv2-non-apache-ubuntu-xenial/fbd06d6/logs/apache_config/heat-wsgi-api.conf.txt.gz

[5] https://review.openstack.org/#/c/463510/

[6]
http://logs.openstack.org/16/462216/11/check/gate-heat-dsvm-functional-convg-mysql-lbaasv2-non-apache-ubuntu-xenial/e7d9e90/console.html#_2017-05-20_07_04_30_718021


-- 
Regards,
Rabi Mishra


[openstack-dev] [refstack] No RefStack IRC meeting on May 23, 2017

2017-05-22 Thread Catherine Cuong Diep


Hi Everyone,

There will be no RefStack IRC meeting on May 23, 2017.  We will resume
meeting on May 30, 2017.

Catherine Diep



Re: [openstack-dev] [nova][vlan trunking] Guest networking configuration for vlan trunk

2017-05-22 Thread Kevin Benton
I think we just need someone to volunteer to do the work to expose it as
metadata to the VM in Nova.

On May 22, 2017 1:27 PM, "Robert Li (baoli)"  wrote:

> Hi Levi,
>
>
>
> Thanks for the info. I noticed that support in the nova code, but was
> wondering why something similar is not available for vlan trunking.
>
>
>
> --Robert
>
>
>
>
>
> On 5/22/17, 3:34 PM, "Moshe Levi"  wrote:
>
>
>
> Hi Robert,
>
> The closest thing that I know about is tagging the SR-IOV physical
> function’s VLAN tag onto guests; see [1].
>
> Maybe you can leverage the same mechanism to configure vlan trunking in the guest.
>
>
>
> [1] - https://specs.openstack.org/openstack/nova-specs/specs/
> ocata/implemented/sriov-pf-passthrough-neutron-port-vlan.html
>
>
>
>
>
> *From:* Robert Li (baoli) [mailto:ba...@cisco.com]
> *Sent:* Monday, May 22, 2017 8:49 PM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* [openstack-dev] [nova][vlan trunking] Guest networking
> configuration for vlan trunk
>
>
>
> Hi,
>
>
>
> I’m trying to find out if there is support in nova (in terms of metadata
> and cfgdrive) to configure vlan trunking in the guest. In the ‘CLI usage
> example’ provided in this wiki https://wiki.openstack.org/
> wiki/Neutron/TrunkPort, it indicates:
>
>
>
> # The typical cloud image will auto-configure the first NIC (eg. eth0)
> only and not the vlan interfaces (eg. eth0.VLAN-ID).
>
> ssh VM0-ADDRESS sudo ip link add link eth0 name eth0.101 type vlan id 101
>
>
>
> I’d like to understand why support for configuring vlan interfaces in
> the guest was not added. And should it be added?
>
>
>
> Thanks,
>
> Robert
>
>
>


Re: [openstack-dev] uWSGI help for Congress

2017-05-22 Thread gordon chung


On 22/05/17 05:48 PM, Eric K wrote:
> If someone out there knows uWSGI and has a couple spare cycles to help
> Congress project, we'd super appreciate it.
>
> The regular contributors to Congress don't have experience with uWSGI
> and could definitely use some help getting started with this goal.
> Thanks a ton!
>

it shouldn't be much different from mod_wsgi. you just need to create a 
uwsgi.ini file that points to the appropriate .wsgi file. here's 
sileht's patch in gnocchi from a while back: 
https://review.openstack.org/#/c/292077. apparently pbr provides the wsgi 
file now (not sure which version, though): 
https://github.com/gnocchixyz/gnocchi/commit/6377e25bdcca68be66fadf65aa16a6f174cfaa99
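As a rough sketch of what gord describes — the file path, socket, and worker counts below are illustrative assumptions, not Congress's actual settings:

```ini
[uwsgi]
# point uwsgi at the wsgi script for the service
wsgi-file = /usr/local/bin/congress-wsgi
http-socket = 127.0.0.1:1789
plugins = python
master = true
processes = 2
threads = 2
```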

cheers,
-- 
gord


[openstack-dev] [nova] Supporting volume_type when booting from volume

2017-05-22 Thread Matt Riedemann

Just wanted to point out that someone else requested this again today:

https://review.openstack.org/#/c/466595/

30 seconds going through launchpad for old blueprints found at least 4 
others:


https://blueprints.launchpad.net/nova/+spec/vol-type-with-blank-vol

https://blueprints.launchpad.net/nova/+spec/volume-support-for-multi-hypervisors

https://blueprints.launchpad.net/nova/+spec/support-boot-instance-set-store-type

https://blueprints.launchpad.net/nova/+spec/ec2-volume-type

And I know cburgess and garyk at least had one each of their own.

Is this really something we are going to have to deny at least once per 
release? My God, how is it that this is the #1 thing everyone, for all 
time, has always wanted Nova to do for them?


I'm honestly starting to get concerned.

--

Thanks,

Matt



Re: [openstack-dev] [telemetry] Room during the next PTG

2017-05-22 Thread gordon chung


On 22/05/17 02:42 AM, Hanxi Liu wrote:
>
> +1. I always thought it was a pity that we have no weekly meeting. The
> whole Telemetry team needs communication.
> A PTG room discussion can not only provide a good chance to communicate
> within the team but also attract more new people to contribute.
> In my opinion, discussion promotes the growth of the project. Maybe
> statistics on the number of people who can attend the meeting would be
> more convincing.
>

i just want to add, i am ok to meet at the PTG, but if we consider the 
turnout at forum sessions, we aren't getting any more developers, just 
random requirement requests.

can we discuss on irc and the mailing list? in fact we encourage it, which 
is why you might see random short emails from me on the list about whether i 
should proceed on a work item a certain way. i understand how meetings and 
presence create this false narrative that work is being done. 
regardless, i do hope we leverage the ML and irc more. if you have a comment 
or question, don't be afraid to use this medium. i shouldn't have to 
wait 6 months to hear about problems :)

cheers,

-- 
gord


Re: [openstack-dev] [tc] revised Postgresql deprecation patch for governance

2017-05-22 Thread Matt Riedemann

On 5/22/2017 10:58 AM, Sean Dague wrote:

I think these are actually compatible concerns. The current proposal to
me actually tries to address A1 & B1, with a hint about why A2 is
valuable and we would want to do that.

It feels like there would be a valuable follow on in which A2 & B2 were
addressed which is basically "progressive enhancements can be allowed to
only work with MySQL based backends". Which is the bit that Monty has
been pushing for in other threads.

This feels like what Tier 2 support looks like: a basic "SQLA and pray"
approach, so that if you live behind SQLA you are probably fine (though not
tested), and then testing and advanced feature rollout on a single
platform. Any of that work might port to other platforms over time, but
we don't want to make that table stakes for enhancements.


I think this is reasonable and is what I've been hoping for as a result 
of the feedback on this.


I think it's totally fine to say tier 1 backends get shiny new features. 
I mean, hell, compare the libvirt driver in nova to all other virt 
drivers in nova. New features are written for the libvirt driver and we 
have to strong-arm them into other drivers for a compatibility story.


I think we should turn on postgresql as a backend in one of the CI jobs, 
as I've noted in the governance change - it could be the nova-next 
non-voting job which only runs on nova, but we should have something 
testing this as long as it's around, especially given how easy it is to 
turn this on in upstream CI (it's flipping a devstack variable).
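For anyone who wants to try this locally: to the best of my knowledge, the devstack switch amounts to a two-line local.conf change along these lines:

```
[[local|localrc]]
disable_service mysql
enable_service postgresql
```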


--

Thanks,

Matt



[openstack-dev] [Forum] Boston recap: skip-level upgrading session

2017-05-22 Thread Shintaro Mizuno

Hi all,

I want to recap the Boston Forum session "Skip-level upgrading - jumping 
ahead to catch up", which Andy and I co-moderated.


We had a good number of attendees, both operators and developers, in 
the room and a positive discussion.
We agreed to bring this topic to the dev ML for further discussion, so all 
feedback and comments are appreciated.


Here are some of the points from the session.
See etherpad [1] for details.

- Most of the ops in the room who did an OpenStack upgrade did N-m to N 
(where m>=2, N: latest release).
- Various skip-level upgrade experiences were shared, and the 
requirements for community support of skip-level upgrades were pointed out.
- There was some level of consensus among the ops in the room that API 
downtime during the maintenance window is acceptable. Keeping instances 
running during the upgrade is critical for operators.
- There was a proposal from the devs that they can try to provide a better 
description of the N-2 to N upgrade impact in release notes (per project).

- All agreed to bring this proposal to dev-ML for further discussion.

[1] https://etherpad.openstack.org/p/BOS-forum-skip-level-upgrading

Regards,
Shintaro

--
Shintaro MIZUNO
NTT Software Innovation Center
TEL: 0422-59-4977
E-mail: mizuno.shint...@lab.ntt.co.jp
shintaro1...@gmail.com




Re: [openstack-dev] Is the pendulum swinging on PaaS layers?

2017-05-22 Thread Jay Pipes

On 05/22/2017 12:01 PM, Zane Bitter wrote:

On 19/05/17 17:59, Matt Riedemann wrote:

I'm not really sure what you're referring to here with 'update' and [1].
Can you expand on that? I know it's a bit of a tangent.


If the user does a stack update that changes the network from 'auto' to 
'none', or vice-versa.


Detour here, apologies...

Why would it matter whether a user changes a stack definition for some 
resource from auto-created network to none? Why would you want 
*anything* to change about instances that had already been created by 
Heat with the previous version of the stack definition?


In other words, why shouldn't the change to the stack simply affect 
*new* resources that the stack might create? After all, get-me-a-network 
is intended for instance *creation* and nothing else...


Why not treat already-provisioned resources of a stack as immutable once 
provisioned? That is, after all, one of the primary benefits of a "cloud 
native application" -- immutability of application images once deployed 
and the clean separation of configuration from data.


This is one of the reasons that the (application) container world has it 
easy with regards to resource management. If you need to change the 
sizing of a deployment [1], Kubernetes doesn't need to go through all 
the hoops we do in resize/migrate/live-migrate. They just blow away one 
or more of the application container replicas [2] and start up new ones. 
[3] Of course, this doesn't work out so well with stateful applications 
(aka the good ol' Nova VM), which is why there's a whole slew of 
constraints on the automatic orchestration potential of StatefulSets in 
Kubernetes [4], constraints that (surprise!) map pretty much one-to-one 
with all the Heat resource dependency management bugs that you 
highlighted in a previous ML response (network identifier is static and 
must follow a pre-described pattern, storage for all pods in the 
StatefulSet must be a PersistentVolume, updating a StatefulSet is 
currently a manual process, etc).


Best,
-jay

[1] A deployment in the Kubernetes sense of the term, ala
https://kubernetes.io/docs/concepts/workloads/controllers/deployment

[2] 
https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/replicaset/replica_set.go#L508


[3] In fact, changing the size/scale of a deployment *does not* 
automatically trigger any action in Kubernetes. Only changes to the 
configuration of the deployment's containers (.spec.template) will 
automatically trigger some action being taken.


[4] 
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#limitations




[openstack-dev] [release] Issues with reno

2017-05-22 Thread Matt Riedemann
I think Doug and I have talked about this before, but it came up again 
tonight.


There seems to be an issue where release notes for the current series 
don't show up in the published release notes, but unreleased things do.


For example, the python-novaclient release notes:

https://docs.openstack.org/releasenotes/python-novaclient/

contain Ocata series release notes and the currently unreleased set of 
changes for Pike, but don't include the 8.0.0 release notes, which are 
important for projects impacted by things we removed in the 8.0.0 
release (lots of deprecated proxy APIs and CLIs were removed).


I've noticed the same in Nova's release notes, where everything between 
the ocata and p-1 tags is missing.


Is there already a bug for this?

--

Thanks,

Matt



Re: [openstack-dev] [Openstack-dev][Tacker] Not able to run user_data commands on my instance

2017-05-22 Thread yanxingan

Hi Vishnu,

The openwrt mgmt driver is different from user_data shell cmds.

User_data shell cmds are injected into the VM instance during the VM's boot 
stage (by the cloud-init script), via the metadata service. The cloud-init 
script is required in the image.

The mgmt_driver, on the other hand, is used to configure the VM after it 
boots successfully. The openwrt mgmt driver uses ssh to configure the VM; 
it is only used for the openwrt image and cannot be used for cirros or 
ubuntu images.

User_data can be used with any image, if the image has the cloud-init script.

To locate this issue, you can execute this cmd in the VM:
$ curl http://169.254.169.254/
and check that cloud-init is installed in the VM.


On 2017/5/23 8:36, Sridhar Ramaswamy wrote:

Hi Vishnu,

Just to rule out any underlying metadata service issue, can you verify 
if a simple heat stack with user_data [1] works fine first? Also, the 
actual TOSCA -> HOT translated template will be available in tacker.log. 
Try creating a heat stack using that HOT template and make sure the 
intended user_data cmds get executed.


HTH,
Sridhar

[1] 
https://docs.openstack.org/developer/heat/template_guide/software_deployment.html#user-data-boot-scripts-and-cloud-init



On Mon, May 22, 2017 at 5:08 AM, Vishnu Pajjuri 
> wrote:


  Hi,

I have installed openstack with tacker via devstack.

I'm able to run the OpenWRT VNF and configure the firewall
service with the openwrt management driver.

I'm also able to run shell commands in the cirros image, which also
uses the openwrt management driver.


Now I have created an ubuntu image and am able to launch it through tacker.

In this instance I want to run some shell commands through tacker's
user_data feature.

But no commands are executing.

Is it possible to run commands on custom images, unlike cirros/openwrt?

If yes, kindly share the procedure to create a proper ubuntu image.



Below is the tosca config file


tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

description: Demo with user-data

metadata:
  template_name: sample-vnfd-userdata

topology_template:
  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker
      capabilities:
        nfv_compute:
          properties:
            num_cpus: 1
            mem_size: 1024 MB
            disk_size: 1 GB
      properties:
        image: ubuntu-image
        config: |
          param0: key1
          param1: key2
        mgmt_driver: openwrt
        config_drive: true
        user_data_format: RAW
        user_data: |
          #!/bin/sh
          echo "my hostname is `hostname`" > /tmp/hostname
          date > /tmp/date
          ifconfig > /tmp/ifconfig
          df -h > /tmp/diskinfo
    CP1:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        management: true
        order: 0
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1

    VL1:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: net_mgmt
        vendor: ACME

Regards,
-Vishnu













Re: [openstack-dev] Is the pendulum swinging on PaaS layers?

2017-05-22 Thread Jay Pipes

On 05/21/2017 03:56 PM, Monty Taylor wrote:

On 05/19/2017 05:10 PM, Matt Riedemann wrote:

On 5/19/2017 3:35 PM, Monty Taylor wrote:

Heck - while I'm on floating ips ... if you have some pre-existing
floating ips and you want to boot servers on them and you want to do
that in parallel, you can't. You can boot a server with a floating ip
that did not pre-exist if you get the port id of the fixed ip of the
server then pass that id to the floating ip create call. Of course,
the server doesn't return the port id in the server record, so at the
very least you need to make a GET /ports.json?device_id={server_id}
call. Of course what you REALLY need to find is the port_id of the ip
of the server that came from a subnet that has 'gateway_ip' defined,
which is even more fun since ips are associated with _networks_ on the
server record and not with subnets.
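The lookup Monty describes can be sketched as a small filter over the Neutron API responses. Field names follow the Neutron v2.0 API; the toy data below is invented for illustration:

```python
def port_with_gateway(ports, subnets_by_id):
    """Return the id of the port whose fixed IP sits on a subnet that
    defines a gateway_ip - the port a floating ip can be attached to.

    `ports` is the body of GET /ports.json?device_id={server_id};
    subnets can be fetched once and cached, as noted above.
    """
    for port in ports:
        for fixed_ip in port.get("fixed_ips", []):
            subnet = subnets_by_id.get(fixed_ip["subnet_id"], {})
            if subnet.get("gateway_ip"):
                return port["id"]
    return None

# Toy responses shaped like Neutron's (values are made up):
ports = [
    {"id": "p1", "fixed_ips": [{"subnet_id": "s1", "ip_address": "10.0.0.5"}]},
    {"id": "p2", "fixed_ips": [{"subnet_id": "s2", "ip_address": "10.1.0.5"}]},
]
subnets = {"s1": {"gateway_ip": None}, "s2": {"gateway_ip": "10.1.0.1"}}
print(port_with_gateway(ports, subnets))  # p2
```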


A few weeks ago I think we went down this rabbit hole in the nova
channel, which led to this etherpad:

https://etherpad.openstack.org/p/nova-os-interfaces

It was really a discussion about the weird APIs that nova has and a lot
of the time our first question is, "why does it return this, or that, or
how is this consumed even?", at which point we put out the Monty signal.


That was a fun conversation!


During a seemingly unrelated forum session on integrating searchlight
with nova-api, operators in the room were saying they wanted to see
ports returned in the server response body, which I think Monty was also
saying when we were going through that etherpad above.


I'd honestly like the contents you get from os-interfaces just always be 
returned as part of the server record. Having it as a second REST call 
isn't terribly helpful - if I need to make an additional call per 
server, I might as well just go call neutron. That way the only 
per-server query I really need to make is GET 
/ports.json?device_id={server_id} - since networks and subnets can be 
cached.


However, if I could do GET /servers/really-detailed or something and get 
/servers/detail + /os-interfaces in one go for all of the servers in my 
project, that would be an efficiency win.


It seems you're really asking us to get rid of REST and implement a 
GraphQL API.


Best,
-jay



[openstack-dev] [Forum] Boston forum etherpad ##hashtag

2017-05-22 Thread Sun, Yih Leong
Hi all,

In response to the previous recommendation on using ##hashtags [1,2,3,4] at 
the OpenStack Forum, the initial data is ready and is tentatively published in 
this repo [5].

In the repo [5], you can see a list of the ##hashtags that were used at Boston 
Forum sessions, with a link to the relevant etherpad url and line number. 
The ##hashtag data were generated from the etherpad links listed on the 
Forum wiki page [6]. 

Although not every moderator/forum-session participated, we did see useful 
tags being used (##newfeature, ##gap, ##painpoint, ##best-practice, etc.) in 
various discussions such as Public Cloud, NFV, etc., and a few projects were 
called out (##nova, ##ironic, ##keystone, ##gnocchi, etc.).

This is the first time we have introduced the ##hashtag mechanism at the 
Forum, along with an "MVP" program to aggregate the data. 
We hope and plan to do more analysis and generate useful information. 
All feedback, questions, and comments of any kind are very welcome. :-)

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-April/115972.html
[2] http://lists.openstack.org/pipermail/user-committee/2017-April/001994.html
[3] https://etherpad.openstack.org/p/BOS-forum-moderator-template
[4] https://etherpad.openstack.org/p/BOS-forum-hashtag-definition
[5] 
https://github.com/openstack/development-proposals/tree/master/forum/201705-bos
[6] https://wiki.openstack.org/wiki/Forum/Boston2017

Thanks!



[openstack-dev] [glance] Stepping Down

2017-05-22 Thread Chandrasekar, Dharini
Hello Glancers,

Due to a change in my job role with my employer, I unfortunately do not have 
the bandwidth to contribute to Glance in the capacity of a Core Contributor.
I hence will have to step down from my role as a Core Contributor in Glance.

I had a great experience working on OpenStack Glance. Thank you all for your 
help and support. I wish you all good luck in all your endeavors.

Thanks,
Dharini.


Re: [openstack-dev] [Openstack-dev][Tacker] Not able to run user_data commands on my instance

2017-05-22 Thread Sridhar Ramaswamy
Hi Vishnu,

Just to rule out any underlying metadata service issue, can you verify if a
simple heat stack with user_data [1] works fine first? Also, the actual
TOSCA -> HOT translated template will be available in tacker.log. Try
creating a heat stack using that HOT template and make sure the intended
user_data cmds get executed.

HTH,
Sridhar

[1]
https://docs.openstack.org/developer/heat/template_guide/software_deployment.html#user-data-boot-scripts-and-cloud-init
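A minimal stack for that sanity check might look like the following; the image, flavor, and network names are placeholders for your environment:

```yaml
heat_template_version: 2016-10-14

resources:
  test_server:
    type: OS::Nova::Server
    properties:
      image: ubuntu-image
      flavor: m1.small
      networks:
        - network: net_mgmt
      user_data_format: RAW
      user_data: |
        #!/bin/sh
        date > /tmp/date
```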


On Mon, May 22, 2017 at 5:08 AM, Vishnu Pajjuri 
wrote:

>  Hi,
>
>I have installed openstack with tacker via devstack.
>
> I'm able to run the OpenWRT VNF and configure the firewall service
> with the openwrt management driver.
>
> I'm also able to run shell commands in the cirros image, which also uses
> the openwrt management driver.
>
>
> Now I have created an ubuntu image and am able to launch it through tacker.
>
> In this instance I want to run some shell commands through tacker's user_data
> feature.
>
> But no commands are executing.
>
> Is it possible to run commands on custom images, unlike cirros/openwrt?
>
> If yes, kindly share the procedure to create a proper ubuntu image.
>
>
>
> Below is the tosca config file
>
>
> tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
>
> description: Demo with user-data
>
> metadata:
>   template_name: sample-vnfd-userdata
>
> topology_template:
>   node_templates:
>     VDU1:
>       type: tosca.nodes.nfv.VDU.Tacker
>       capabilities:
>         nfv_compute:
>           properties:
>             num_cpus: 1
>             mem_size: 1024 MB
>             disk_size: 1 GB
>       properties:
>         image: ubuntu-image
>         config: |
>           param0: key1
>           param1: key2
>         mgmt_driver: openwrt
>         config_drive: true
>         user_data_format: RAW
>         user_data: |
>           #!/bin/sh
>           echo "my hostname is `hostname`" > /tmp/hostname
>           date > /tmp/date
>           ifconfig > /tmp/ifconfig
>           df -h > /tmp/diskinfo
>     CP1:
>       type: tosca.nodes.nfv.CP.Tacker
>       properties:
>         management: true
>         order: 0
>         anti_spoofing_protection: false
>       requirements:
>         - virtualLink:
>             node: VL1
>         - virtualBinding:
>             node: VDU1
>
>     VL1:
>       type: tosca.nodes.nfv.VL
>       properties:
>         network_name: net_mgmt
>         vendor: ACME
>
> Regards,
> -Vishnu
>
>
>


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread John Dickinson


On 22 May 2017, at 15:50, Anne Gentle wrote:

> On Mon, May 22, 2017 at 5:41 PM, Sean McGinnis 
> wrote:
>
>> On Mon, May 22, 2017 at 09:39:09AM +, Alexandra Settle wrote:
>>
>> [snip]
>>
>>> 1. We could combine all of the documentation builds, so that each
>> project has a single doc/source directory that includes developer,
>> contributor, and user documentation. This option would reduce the number of
>> build jobs we have to run, and cut down on the number of separate sphinx
>> configurations in each repository. It would completely change the way we
>> publish the results, though, and we would need to set up redirects from all
>> of the existing locations to the new locations and move all of the existing
>> documentation under the new structure.
>>>
>>> 2. We could retain the existing trees for developer and API docs, and
>> add a new one for "user" documentation. The installation guide,
>> configuration guide, and admin guide would move here for all projects.
>> Neutron's user documentation would include the current networking guide as
>> well. This option would add 1 new build to each repository, but would allow
>> us to easily roll out the change with less disruption in the way the site
>> is organized and published, so there would be less work in the short term.
>>>
>>> 3. We could do option 2, but use a separate repository for the new
>> user-oriented documentation. This would allow project teams to delegate
>> management of the documentation to a separate review project-sub-team, but
>> would complicate the process of landing code and documentation updates
>> together so that the docs are always up to date.
>>>
>>
>> I actually like the first two a little better, but I think this might
>> actually be the best option. My hope
>> would be that there could continue to be a docs team that can help out
>> with some of this, and by having a
>> separate repo it would allow us to set up separate teams with rights to
>> merge.
>>
>
> Hey Sean, is the "right to merge" the top difficulty you envision with 1 or
> 2? Or is it finding people to do the writing and reviews? Curious about
> your thoughts and if you have some experience with specific day-to-day
> behavior here, I would love your insights.
>
> Anne
>

I prefer option 1, which should be obvious from Anne's reference to my existing 
work to enable that. Option 2 seems yucky (to me) because it adds yet another 
docs tree and sphinx config to projects, and thus is counter to my hope that 
we'll have one single docs tree per repo.

I disagree with option 3. It seems to be a way to organize the content simply 
to wall off access to parts of it; e.g. docs people can't land stuff in the 
code part and potentially some code people can't land stuff in the docs part. 
However, docs should always land with the code that changed them. Separating 
the docs into a separate repo removes the ability to land docs with code.

I really like the plan Alex has described about docs team representatives 
participating more directly with the projects. If those representatives should 
be able to add a +2 or -2 to project patches, then make those representatives 
core reviewers for the respective project. Like every other core reviewer, they 
should be trusted to use good judgement for choosing what to review and what 
score to give it.

Let's work towards option 1. Although I think option 2 is largely orthogonal to 
option 1 (i.e. the "user" docs should be merged into the project trees 
regardless of unification of the various in-project docs trees), it can happen 
before or after option 1 is done.


--John



>
>>
>>> Personally, I think option 2 or 3 are more realistic, for now. It does
>> mean that an extra build would have to be maintained, but it retains that
>> key differentiator between what is user and developer documentation and
>> involves fewer changes to existing published contents and build jobs. I
>> definitely think option 1 is feasible, and would be happy to make it work
>> if the community prefers this. We could also view option 1 as the
>> longer-term goal, and option 2 as an incremental step toward it (option 3
>> would make option 1 more complicated to achieve).
>>>
>>> What does everyone think of the proposed options? Questions? Other
>> thoughts?
>>>
>>> Cheers,
>>>
>>> Alex
>>>
>>>
>>
>>
>
>
>
> -- 
>
> Read my blog: justwrite.click 

Re: [openstack-dev] [oslo][oslo.messaging] Call to deprecate the 'pika' driver in the oslo.messaging project

2017-05-22 Thread ChangBo Guo
+1 , let's focus on key drivers.

2017-05-17 2:02 GMT+08:00 Joshua Harlow :

> Fine with me,
>
> I'd personally rather get down to say 2 'great' drivers for RPC,
>
> And say 1 (or 2?) for notifications.
>
> So ya, wfm.
>
> -Josh
>
>
> Mehdi Abaakouk wrote:
>
>> +1 too, I haven't seen its contributors in a while.
>>
>> On Mon, May 15, 2017 at 09:42:00PM -0400, Flavio Percoco wrote:
>>
>>> On 15/05/17 15:29 -0500, Ben Nemec wrote:
>>>


 On 05/15/2017 01:55 PM, Doug Hellmann wrote:

> Excerpts from Davanum Srinivas (dims)'s message of 2017-05-15
> 14:27:36 -0400:
>
>> On Mon, May 15, 2017 at 2:08 PM, Ken Giusti 
>> wrote:
>>
>>> Folks,
>>>
>>> It was decided at the oslo.messaging forum at summit that the pika
>>> driver will be marked as deprecated [1] for removal.
>>>
>>
>> [dims} +1 from me.
>>
>
> +1
>

 Also +1

>>>
>>> +1
>>>
>>> Flavio
>>>
>>> --
>>> @flaper87
>>> Flavio Percoco
>>>
>>
>>
>>
>> 
>>
>
>



-- 
ChangBo Guo(gcb)


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread Anne Gentle
On Mon, May 22, 2017 at 5:41 PM, Sean McGinnis 
wrote:

> On Mon, May 22, 2017 at 09:39:09AM +, Alexandra Settle wrote:
>
> [snip]
>
> > 1. We could combine all of the documentation builds, so that each
> project has a single doc/source directory that includes developer,
> contributor, and user documentation. This option would reduce the number of
> build jobs we have to run, and cut down on the number of separate sphinx
> configurations in each repository. It would completely change the way we
> publish the results, though, and we would need to set up redirects from all
> of the existing locations to the new locations and move all of the existing
> documentation under the new structure.
> >
> > 2. We could retain the existing trees for developer and API docs, and
> add a new one for "user" documentation. The installation guide,
> configuration guide, and admin guide would move here for all projects.
> Neutron's user documentation would include the current networking guide as
> well. This option would add 1 new build to each repository, but would allow
> us to easily roll out the change with less disruption in the way the site
> is organized and published, so there would be less work in the short term.
> >
> > 3. We could do option 2, but use a separate repository for the new
> user-oriented documentation. This would allow project teams to delegate
> management of the documentation to a separate review project-sub-team, but
> would complicate the process of landing code and documentation updates
> together so that the docs are always up to date.
> >
>
> I actually like the first two a little better, but I think this might
> actually be the best option. My hope
> would be that there could continue to be a docs team that can help out
> with some of this, and by having a
> separate repo it would allow us to set up separate teams with rights to
> merge.
>

Hey Sean, is the "right to merge" the top difficulty you envision with 1 or
2? Or is it finding people to do the writing and reviews? Curious about
your thoughts and if you have some experience with specific day-to-day
behavior here, I would love your insights.

Anne


>
> > Personally, I think option 2 or 3 are more realistic, for now. It does
> mean that an extra build would have to be maintained, but it retains that
> key differentiator between what is user and developer documentation and
> involves fewer changes to existing published contents and build jobs. I
> definitely think option 1 is feasible, and would be happy to make it work
> if the community prefers this. We could also view option 1 as the
> longer-term goal, and option 2 as an incremental step toward it (option 3
> would make option 1 more complicated to achieve).
> >
> > What does everyone think of the proposed options? Questions? Other
> thoughts?
> >
> > Cheers,
> >
> > Alex
> >
> >
>
>



-- 

Read my blog: justwrite.click 
Subscribe to Docs|Code: docslikecode.com


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread Sean McGinnis
On Mon, May 22, 2017 at 09:39:09AM +, Alexandra Settle wrote:

[snip]

> 1. We could combine all of the documentation builds, so that each project has 
> a single doc/source directory that includes developer, contributor, and user 
> documentation. This option would reduce the number of build jobs we have to 
> run, and cut down on the number of separate sphinx configurations in each 
> repository. It would completely change the way we publish the results, 
> though, and we would need to set up redirects from all of the existing 
> locations to the new locations and move all of the existing documentation 
> under the new structure.
> 
> 2. We could retain the existing trees for developer and API docs, and add a 
> new one for "user" documentation. The installation guide, configuration 
> guide, and admin guide would move here for all projects. Neutron's user 
> documentation would include the current networking guide as well. This option 
> would add 1 new build to each repository, but would allow us to easily roll 
> out the change with less disruption in the way the site is organized and 
> published, so there would be less work in the short term.
> 
> 3. We could do option 2, but use a separate repository for the new 
> user-oriented documentation. This would allow project teams to delegate 
> management of the documentation to a separate review project-sub-team, but 
> would complicate the process of landing code and documentation updates 
> together so that the docs are always up to date.
> 

I actually like the first two a little better, but I think this might actually 
be the best option. My hope
would be that there could continue to be a docs team that can help out with 
some of this, and by having a
separate repo it would allow us to set up separate teams with rights to merge.

> Personally, I think option 2 or 3 are more realistic, for now. It does mean 
> that an extra build would have to be maintained, but it retains that key 
> differentiator between what is user and developer documentation and involves 
> fewer changes to existing published contents and build jobs. I definitely 
> think option 1 is feasible, and would be happy to make it work if the 
> community prefers this. We could also view option 1 as the longer-term goal, 
> and option 2 as an incremental step toward it (option 3 would make option 1 
> more complicated to achieve).
> 
> What does everyone think of the proposed options? Questions? Other thoughts?
> 
> Cheers,
> 
> Alex
> 
> 





[openstack-dev] uWSGI help for Congress

2017-05-22 Thread Eric K
If someone out there knows uWSGI and has a couple spare cycles to help
Congress project, we'd super appreciate it.

The regular contributors to Congress don't have experience with uWSGI and
could definitely use some help getting started with this goal. Thanks a ton!

https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html

Eric
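For anyone picking this up: the goal is essentially "run the API's WSGI application under uwsgi and front it with Apache mod_proxy_uwsgi". A minimal sketch of the uwsgi side is below -- the WSGI script path, socket, and process counts are assumptions for illustration, not Congress's actual entry points, so check the project before reusing any of it.

```shell
#!/bin/sh
# Sketch only: the wsgi-file path, socket, and worker counts are
# hypothetical -- substitute the project's real WSGI script.
cat > /tmp/congress-uwsgi.ini <<'EOF'
[uwsgi]
; WSGI script exposing the API's "application" object (path is hypothetical)
wsgi-file = /usr/local/bin/congress-wsgi
master = true
processes = 2
threads = 1
; local socket that Apache's mod_proxy_uwsgi forwards to, e.g. in the vhost:
;   ProxyPass "/policy" "uwsgi://127.0.0.1:10021"
socket = 127.0.0.1:10021
plugin = python
die-on-term = true
EOF
echo "wrote /tmp/congress-uwsgi.ini"
```

The devstack plugins of projects that already completed the goal are a good source for the real values.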


Re: [openstack-dev] [OpenStack-docs] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread Petr Kovar
On Mon, 22 May 2017 09:39:09 +
Alexandra Settle  wrote:

(...)

> Until this point, the documentation team has owned several manuals that 
> include
> content related to multiple projects, including an installation guide, admin
> guide, configuration guide, networking guide, and security guide. Because the
> team no longer has the resources to own that content, we want to invert the
> relationship between the doc team and project teams, so that we become 
> liaisons
> to help with maintenance instead of asking for project teams to provide 
> liaisons
> to help with content. As a part of that change, we plan to move the existing
> content out of the central manuals repository, into repositories owned by the
> appropriate project teams. Project teams will then own the content and the
> documentation team will assist by managing the build tools, helping with 
> writing
> guidelines and style, but not writing the bulk of the text.

First off, thanks a lot for sending this out!

If my understanding is correct, the openstack-manual repo would only store
static index pages and some configuration files? Everything under
https://github.com/openstack/openstack-manuals/tree/master/doc would be
moved to project repos? 

The installation guide is special in that project-specific in-tree guides
still depend on common content that currently lives in openstack-manuals.
Where would that common content go, then?

This includes installation guide sections such as:

https://docs.openstack.org/ocata/install-guide-rdo/overview.html
https://docs.openstack.org/ocata/install-guide-rdo/environment.html
https://docs.openstack.org/ocata/install-guide-rdo/launch-instance.html

Also, unlike the openstack-manual's installation guide content, the in-tree
guides do not use conditional content for different distributions. I assume
individual projects would need to maintain separate common content for
each distribution?

(...)

> 3. We could do option 2, but use a separate repository for the new 
> user-oriented
> documentation. This would allow project teams to delegate management of the
> documentation to a separate review project-sub-team, but would complicate the
> process of landing code and documentation updates together so that the docs 
> are
> always up to date.

If the intention here is to make the content more visible to developers who
work in project repos, then separating the content to a different repo
kind of goes against that idea, I think.

Cheers,
pk



Re: [openstack-dev] [nova][vlan trunking] Guest networking configuration for vlan trunk

2017-05-22 Thread Robert Li (baoli)
Hi Levi,

Thanks for the info. I noticed that support in the nova code, but was wondering 
why something similar is not available for vlan trunking.

--Robert


On 5/22/17, 3:34 PM, "Moshe Levi" 
> wrote:

Hi Robert,
The closest thing I know of is the VLAN tagging support for SR-IOV physical 
functions passed through to guests; see [1].
Maybe you can leverage the same mechanism to configure vlan trunking in the 
guest.

[1] - 
https://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/sriov-pf-passthrough-neutron-port-vlan.html


From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, May 22, 2017 8:49 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova][vlan trunking] Guest networking configuration 
for vlan trunk

Hi,

I’m trying to find out if there is support in nova (in terms of metadata and 
cfgdrive) to configure vlan trunking in the guest. In the ‘CLI usage example’ 
provided in this wiki https://wiki.openstack.org/wiki/Neutron/TrunkPort, it 
indicates:

# The typical cloud image will auto-configure the first NIC (eg. eth0) only and 
not the vlan interfaces (eg. eth0.VLAN-ID).
ssh VM0-ADDRESS sudo ip link add link eth0 name eth0.101 type vlan id 101

I’d like to understand why the support of configuring vlan interfaces in the 
guest is not added. And should it be added?

Thanks,
Robert
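Since the metadata/cfgdrive don't describe the subports today, the guest has to create the VLAN interfaces itself. A hedged sketch of what a guest-side script would assemble is below -- eth0 and VLAN 101 mirror the wiki example; in a real guest the values would have to come from the trunk's subport list, which is exactly the information that is currently missing:

```shell
#!/bin/sh
# Build the commands a guest needs for one VLAN subinterface on the
# trunk's parent NIC. Values are illustrative, taken from the wiki example.
PARENT=eth0
VLAN_ID=101
VIF="${PARENT}.${VLAN_ID}"
ADD_CMD="ip link add link ${PARENT} name ${VIF} type vlan id ${VLAN_ID}"
UP_CMD="ip link set ${VIF} up"
# Dry run: print instead of execute (running them needs root and a real NIC)
echo "$ADD_CMD"
echo "$UP_CMD"
```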


Re: [openstack-dev] [release][infra][puppet][stable] Re: [Release-job-failures] Release of openstack/puppet-nova failed

2017-05-22 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2017-05-22 19:16:34 +:
> On 2017-05-22 12:31:49 -0600 (-0600), Alex Schultz wrote:
> > On Mon, May 22, 2017 at 10:34 AM, Jeremy Stanley  wrote:
> > > On 2017-05-22 09:06:26 -0600 (-0600), Alex Schultz wrote:
> > > [...]
> > >> We ran into this for the puppet-module-build check job so I created a
> > >> puppet-agent-install builder.  Perhaps the job needs that added to it
> > > [...]
> > >
> > > Problem here being these repos share the common tarball jobs used
> > > for generating python sdists, with a little custom logic baked into
> > > run-tarball.sh[*] for detecting and adjusting when the repo is for a
> > > Puppet module. I think this highlights the need to create custom
> > > tarball jobs for Puppet modules, preferably by abstracting this
> > > custom logic into a new JJB builder.
> > 
> > I assume you mean a problem if we added this builder to the job
> > and it fails for some reason thus impacting the python jobs?
> 
> My concern is more that it increases complexity by further embedding
> package selection and installation choices into that already complex
> script. We'd (Infra team) like to get more of the logic out of that
> random pile of shell scripts and directly into job definitions
> instead. For one thing, those scripts are only updated when we
> regenerate our nodepool images (at best once a day) and leads to
> significant job inconsistencies if we have image upload failures in
> some providers but not others. In contrast, job configurations are
> updated nearly instantly (and can even be self-tested in many cases
> once we're on Zuul v3).
> 
> > As far as adding to the builder to the job that's not really a
> > problem and wouldn't change those jobs as they don't reference the
> > installed puppet executable.
> 
> It does risk further destabilizing the generic tarball jobs by
> introducing more outside dependencies which will only be used by a
> scant handful of the projects running them.
> 
> > The problem I have with putting this in the .sh is that it becomes
> > yet another place where we're doing this package installation (we
> > already do it in puppet openstack in
> > puppet-openstack-integration). I originally proposed the builder
> > because it could be reused if a job requires puppet be available.
> > ie. this case. I'd rather not do what we do in the builder in a
> > shell script in the job and it seems like this is making it more
> > complicated than it needs to be when we have to manage this in the
> > long term.
> 
> Agreed, I'm saying a builder which installs an unnecessary Puppet
> toolchain for the generic tarball jobs is not something we'd want,
> but it would be pretty trivial to make puppet-specific tarball jobs
> which do use that builder (and has the added benefit that
> Puppet-specific logic can be moved _out_ of run-tarballs.sh and into
> your job configuration instead at that point).

That approach makes sense.

When the new job template is set up, let me know so I can add it to the
release repo validation as a known way to release things.

Doug
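For the archive, a rough sketch of what such a puppet-specific tarball job could look like in JJB -- every name here is made up for illustration; the real template, builder, and publisher names would come from project-config:

```yaml
# Hypothetical JJB fragment: a puppet-specific tarball job that pulls the
# toolchain in via a shared builder instead of logic baked into
# run-tarballs.sh.
- job-template:
    name: '{name}-puppet-tarball'
    builders:
      - puppet-agent-install     # the existing builder Alex mentioned
      - puppet-module-tarball    # new builder holding the puppet logic
    publishers:
      - console-log
```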



Re: [openstack-dev] [nova][vlan trunking] Guest networking configuration for vlan trunk

2017-05-22 Thread Moshe Levi
Hi Robert,
The closest thing I know of is the VLAN tagging support for SR-IOV physical 
functions passed through to guests; see [1].
Maybe you can leverage the same mechanism to configure vlan trunking in the 
guest.

[1] - 
https://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/sriov-pf-passthrough-neutron-port-vlan.html


From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, May 22, 2017 8:49 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova][vlan trunking] Guest networking configuration 
for vlan trunk

Hi,

I’m trying to find out if there is support in nova (in terms of metadata and 
cfgdrive) to configure vlan trunking in the guest. In the ‘CLI usage example’ 
provided in this wiki https://wiki.openstack.org/wiki/Neutron/TrunkPort, it 
indicates:

# The typical cloud image will auto-configure the first NIC (eg. eth0) only and 
not the vlan interfaces (eg. eth0.VLAN-ID).
ssh VM0-ADDRESS sudo ip link add link eth0 name eth0.101 type vlan id 101

I’d like to understand why the support of configuring vlan interfaces in the 
guest is not added. And should it be added?

Thanks,
Robert


Re: [openstack-dev] [release][infra][puppet][stable] Re: [Release-job-failures] Release of openstack/puppet-nova failed

2017-05-22 Thread Jeremy Stanley
On 2017-05-22 12:31:49 -0600 (-0600), Alex Schultz wrote:
> On Mon, May 22, 2017 at 10:34 AM, Jeremy Stanley  wrote:
> > On 2017-05-22 09:06:26 -0600 (-0600), Alex Schultz wrote:
> > [...]
> >> We ran into this for the puppet-module-build check job so I created a
> >> puppet-agent-install builder.  Perhaps the job needs that added to it
> > [...]
> >
> > Problem here being these repos share the common tarball jobs used
> > for generating python sdists, with a little custom logic baked into
> > run-tarball.sh[*] for detecting and adjusting when the repo is for a
> > Puppet module. I think this highlights the need to create custom
> > tarball jobs for Puppet modules, preferably by abstracting this
> > custom logic into a new JJB builder.
> 
> I assume you mean a problem if we added this builder to the job
> and it fails for some reason thus impacting the python jobs?

My concern is more that it increases complexity by further embedding
package selection and installation choices into that already complex
script. We'd (Infra team) like to get more of the logic out of that
random pile of shell scripts and directly into job definitions
instead. For one thing, those scripts are only updated when we
regenerate our nodepool images (at best once a day) and leads to
significant job inconsistencies if we have image upload failures in
some providers but not others. In contrast, job configurations are
updated nearly instantly (and can even be self-tested in many cases
once we're on Zuul v3).

> As far as adding to the builder to the job that's not really a
> problem and wouldn't change those jobs as they don't reference the
> installed puppet executable.

It does risk further destabilizing the generic tarball jobs by
introducing more outside dependencies which will only be used by a
scant handful of the projects running them.

> The problem I have with putting this in the .sh is that it becomes
> yet another place where we're doing this package installation (we
> already do it in puppet openstack in
> puppet-openstack-integration). I originally proposed the builder
> because it could be reused if a job requires puppet be available.
> ie. this case. I'd rather not do what we do in the builder in a
> shell script in the job and it seems like this is making it more
> complicated than it needs to be when we have to manage this in the
> long term.

Agreed, I'm saying a builder which installs an unnecessary Puppet
toolchain for the generic tarball jobs is not something we'd want,
but it would be pretty trivial to make puppet-specific tarball jobs
which do use that builder (and has the added benefit that
Puppet-specific logic can be moved _out_ of run-tarballs.sh and into
your job configuration instead at that point).
-- 
Jeremy Stanley




Re: [openstack-dev] [Openstack-operators] [nova][ironic][scheduler][placement] IMPORTANT: Getting rid of the automated reschedule functionality

2017-05-22 Thread Matt Riedemann

On 5/22/2017 1:50 PM, Matt Riedemann wrote:

On 5/22/2017 12:54 PM, Jay Pipes wrote:

Hi Ops,

I need your feedback on a very important direction we would like to 
pursue. I realize that there were Forum sessions about this topic at 
the summit in Boston and that there were some decisions that were 
reached.


I'd like to revisit that decision and explain why I'd like your 
support for getting rid of the automatic reschedule behaviour entirely 
in Nova for Pike.


== The current situation and why it sucks ==

Nova currently attempts to "reschedule" instances when any of the 
following events occur:


a) the "claim resources" process that occurs on the nova-compute 
worker results in the chosen compute node exceeding its own capacity


b) in between the time a compute node was chosen by the scheduler, 
another process launched an instance that would violate an affinity 
constraint


c) an "unknown" exception occurs during the spawn process. In 
practice, this really only is seen when the Ironic baremetal node that 
was chosen by the scheduler turns out to be unreliable (IPMI issues, 
BMC failures, etc) and wasn't able to launch the instance. [1]


The logic for handling these reschedules makes the Nova conductor, 
scheduler and compute worker code very complex. With the new cellsv2 
architecture in Nova, child cells are not able to communicate with the 
Nova scheduler (and thus "ask for a reschedule").


To be clear, they are able to communicate, and do, as long as you 
configure them to be able to do so. The long-term goal is that you don't 
have to configure them to be able to do so, so we're trying to design 
and work in that mode toward that goal.




We (the Nova team) would like to get rid of the automated rescheduling 
behaviour that Nova currently exposes because we could eliminate a 
large amount of complexity (which leads to bugs) from the 
already-complicated dance of communication that occurs between 
internal Nova components.


== What we would like to do ==

With the move of the resource claim to the Nova scheduler [2], we can 
entirely eliminate the a) class of Reschedule causes.


This leaves class b) and c) causes of Rescheduling.

For class b) causes, we should be able to solve this issue when the 
placement service understands affinity/anti-affinity (maybe 
Queens/Rocky). Until then, we propose that instead of raising a 
Reschedule when an affinity constraint was last-minute violated due to 
a racing scheduler decision, that we simply set the instance to an 
ERROR state.


Personally, I have only ever seen anti-affinity/affinity use cases in 
relation to NFV deployments, and in every NFV deployment of OpenStack 
there is a VNFM or MANO solution that is responsible for the 
orchestration of instances belonging to various service function 
chains. I think it is reasonable to expect the MANO system to be 
responsible for attempting a re-launch of an instance that was set to 
ERROR due to a last-minute affinity violation.


**Operators, do you agree with the above?**

Finally, for class c) Reschedule causes, I do not believe that we 
should be attempting automated rescheduling when "unknown" errors 
occur. I just don't believe this is something Nova should be doing.


I recognize that large Ironic users expressed their concerns about 
IPMI/BMC communication being unreliable and not wanting to have users 
manually retry a baremetal instance launch. But, on this particular 
point, I'm of the opinion that Nova just do one thing and do it well. 
Nova isn't an orchestrator, nor is it intending to be a "just 
continually try to get me to this eventual state" system like Kubernetes.


If we removed Reschedule for class c) failures entirely, large Ironic 
deployers would have to train users to manually retry a failed launch 
or would need to write a simple retry mechanism into whatever 
client/UI that they expose to their users.
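For what it's worth, that "simple retry mechanism" need not be complicated. A hedged sketch of a client-side wrapper is below -- the command, attempt count, and backoff are all placeholders, not anything nova or any deployer's tooling actually ships:

```shell
#!/bin/sh
# A client-side retry wrapper -- the kind of thing a deployer's CLI/UI
# could use instead of relying on nova's automated Reschedule.
retry() {
    max=$1; shift
    n=1
    while ! "$@"; do
        [ "$n" -ge "$max" ] && return 1
        n=$((n + 1))
        sleep 1   # real code would back off and re-check instance state
    done
    return 0
}
# Usage (placeholder command):
#   retry 3 openstack server create --image ... --flavor ... my-bm-node
```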


**Ironic operators, would the above decision force you to abandon Nova 
as the multi-tenant BMaaS facility?**


Thanks in advance for your consideration and feedback.

Best,
-jay

[1] This really does not occur with any frequency for hypervisor virt 
drivers, since the exceptions those hypervisors throw are caught by 
the nova-compute worker and handled without raising a Reschedule.


Are you sure about that?

https://github.com/openstack/nova/blob/931c3f48188e57e71aa6518d5253e1a5bd9a27c0/nova/compute/manager.py#L2041-L2049 



The compute manager handles anything non-specific that leaks up from the 
virt driver.spawn() method and reschedules it. Think 
ProcessExecutionError when vif plugging fails in the libvirt driver 
because the command blew up for some reason (sudo on the host is 
wrong?). I'm not saying it should, as I'm guessing most of these types 
of failures are due to misconfiguration, but it is how things currently 
work today.




[2] 
http://specs.openstack.org/openstack/nova-specs/specs/pike/approved/placement-claims.html 




[openstack-dev] [os-upstream-institute] Meeting reminder :)

2017-05-22 Thread Ildiko Vancsa
Hi Team,

It is a friendly reminder that we have our meeting in less than an hour (2000 
UTC) on #openstack-meeting-3. :)

You can find the agenda here: 
https://etherpad.openstack.org/p/openstack-upstream-institute-meetings 
 

See you there soon!

Thanks,
Ildikó
IRC: ildikov


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread Doug Hellmann
Excerpts from Anne Gentle's message of 2017-05-22 12:36:29 -0500:
> > On May 22, 2017, at 9:09 AM, Doug Hellmann  wrote:
> >
> > Excerpts from Dmitry Tantsur's message of 2017-05-22 12:26:25 +0200:
> >>> On 05/22/2017 11:39 AM, Alexandra Settle wrote:
> >>> Hi everyone,
> >>>
> >>> The documentation team are rapidly losing key contributors and core 
> >>> reviewers.
> >>> We are not alone, this is happening across the board. It is making things
> >>> harder, but not impossible.
> >>>
> >>> Since our inception in 2010, we’ve been climbing higher and higher trying 
> >>> to
> >>> achieve the best documentation we could, and uphold our high standards. 
> >>> This is
> >>> something to be incredibly proud of.
> >>>
> >>> However, we now need to take a step back and realise that the amount of 
> >>> work we
> >>> are attempting to maintain is now out of reach for the team size that we 
> >>> have.
> >>> At the moment we have 13 cores, of whom none are full time contributors or
> >>> reviewers. This includes myself.
> >>>
> >>> Until this point, the documentation team has owned several manuals that 
> >>> include
> >>> content related to multiple projects, including an installation guide, 
> >>> admin
> >>> guide, configuration guide, networking guide, and security guide. Because 
> >>> the
> >>> team no longer has the resources to own that content, we want to invert 
> >>> the
> >>> relationship between the doc team and project teams, so that we become 
> >>> liaisons
> >>> to help with maintenance instead of asking for project teams to provide 
> >>> liaisons
> >>> to help with content. As a part of that change, we plan to move the 
> >>> existing
> >>> content out of the central manuals repository, into repositories owned by 
> >>> the
> >>> appropriate project teams. Project teams will then own the content and the
> >>> documentation team will assist by managing the build tools, helping with 
> >>> writing
> >>> guidelines and style, but not writing the bulk of the text.
> >>>
> >>> We currently have the infrastructure set up to empower project teams to 
> >>> manage
> >>> their own documentation in their own tree, and many do. As part of this 
> >>> change,
> >>> the rest of the existing content from the install guide and admin guide 
> >>> will
> >>> also move into project-owned repositories. We have a few options for how 
> >>> to
> >>> implement the move, and that's where we need feedback now.
> >>>
> >>> 1. We could combine all of the documentation builds, so that each project 
> >>> has a
> >>> single doc/source directory that includes developer, contributor, and user
> >>> documentation. This option would reduce the number of build jobs we have 
> >>> to run,
> >>> and cut down on the number of separate sphinx configurations in each 
> >>> repository.
> >>> It would completely change the way we publish the results, though, and we 
> >>> would
> >>> need to set up redirects from all of the existing locations to the new
> >>> locations and move all of the existing documentation under the new 
> >>> structure.
> >>>
> >>> 2. We could retain the existing trees for developer and API docs, and add 
> >>> a new
> >>> one for "user" documentation. The installation guide, configuration 
> >>> guide, and
> >>> admin guide would move here for all projects. Neutron's user 
> >>> documentation would
> >>> include the current networking guide as well. This option would add 1 new 
> >>> build
> >>> to each repository, but would allow us to easily roll out the change with 
> >>> less
> >>> disruption in the way the site is organized and published, so there would 
> >>> be
> >>> less work in the short term.
> >>>
> >>> 3. We could do option 2, but use a separate repository for the new 
> >>> user-oriented
> >>> documentation. This would allow project teams to delegate management of 
> >>> the
> >>> documentation to a separate review project-sub-team, but would complicate 
> >>> the
> >>> process of landing code and documentation updates together so that the 
> >>> docs are
> >>> always up to date.
> >>>
> >>> Personally, I think option 2 or 3 are more realistic, for now. It does 
> >>> mean
> >>> that an extra build would have to be maintained, but it retains that key
> >>> differentiator between what is user and developer documentation and 
> >>> involves
> >>> fewer changes to existing published contents and build jobs. I definitely 
> >>> think
> >>> option 1 is feasible, and would be happy to make it work if the community
> >>> prefers this. We could also view option 1 as the longer-term goal, and 
> >>> option 2
> >>> as an incremental step toward it (option 3 would make option 1 more 
> >>> complicated
> >>> to achieve).
> >>>
> >>> What does everyone think of the proposed options? Questions? Other 
> >>> thoughts?
> >>
> >> We're already hosting install-guide and api-ref in our tree, and I'd 
> >> prefer we
> >> don't change it, as it's going to be annoying (especially wrt backports). 
> 

Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-22 10:42:44 -0700:
> [snip]
> 
> So from Kolla perspective, since our dev guide is really also
> operators guide (we are operators tool so we're kinda "special" on
> that front), we'd love to handle both deployment guide, user manuals
> and all that in our tree. If we could create infrastructure that would
> allow us to segregate our content and manage it ourselves, I think
> that would be useful. Tell us how to help:)
> 
> Cheers,
> Michal
> 

The first step is to choose one of the options Alex proposed. From
there, we'll work out more detailed steps for achieving that.

Doug



Re: [openstack-dev] [release][infra][puppet][stable] Re: [Release-job-failures] Release of openstack/puppet-nova failed

2017-05-22 Thread Alex Schultz
On Mon, May 22, 2017 at 10:34 AM, Jeremy Stanley  wrote:
> On 2017-05-22 09:06:26 -0600 (-0600), Alex Schultz wrote:
> [...]
>> We ran into this for the puppet-module-build check job so I created a
>> puppet-agent-install builder.  Perhaps the job needs that added to it
> [...]
>
> Problem here being these repos share the common tarball jobs used
> for generating python sdists, with a little custom logic baked into
> run-tarball.sh[*] for detecting and adjusting when the repo is for a
> Puppet module. I think this highlights the need to create custom
> tarball jobs for Puppet modules, preferably by abstracting this
> custom logic into a new JJB builder.
>

I assume you mean a problem if we added this builder to the job and it
fails for some reason, thus impacting the python jobs?  As far as
adding the builder to the job, that's not really a problem and
wouldn't change those jobs, as they don't reference the installed
puppet executable.  The problem I have with putting this in the .sh is
that it becomes yet another place where we're doing this package
installation (we already do it in puppet openstack in
puppet-openstack-integration). I originally proposed the builder
because it could be reused if a job requires puppet to be available, i.e.
this case.  I'd rather not replicate what the builder does in a shell
script in the job; that seems to make this more complicated than it
needs to be, given we have to manage it in the long term.

Thanks,
-Alex

> [*]  https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/scripts/run-tarball.sh?id=a2b9e37#n17
>  >
> --
> Jeremy Stanley
>



Re: [openstack-dev] [all] Onboarding rooms postmortem, what did you do, what worked, lessons learned

2017-05-22 Thread Miguel Lavalle
Project: Neutron
Attendees: ~15

Neutron's session was a combination of a slide presentation (
https://www.slideshare.net/MiguelLavalle/openstack-neutron-new-developers-on-boarding)
with predefined exercises on the DevStack VM that was used during the
OpenStack Upstream Institute the weekend prior to the Summit. This is what
we did in more detail:

   - Introduce team members present in the room: Kevin Benton, Armando
   Migliaccio, Swaminathan Vasudevan, and Brian Haley. We thought this was
   important to send the message that we are an open and welcoming community /
   project / team.
   - Quick overview of Neutron team organization, IRC meetings and the
   concept of the Neutron Stadium of related projects. We also showed the
   project's mascot and handed out stickers.
   - We didn't want to make any assumptions as to prior knowledge of the
   attendees, so we started from the beginning. We reviewed the concepts
   associated with REST APIs from the point of view of Neutron. We gave them the
   exercise to create and update a port using the OpenStack client with the
   --debug option and then we reviewed the different pieces of the requests
   and responses: HTTP verb, Neutron endpoint, URI, response code, etc. We
   used annotated slides with examples to show these pieces.
   - Neutron's plug-in based architecture, core resources, core plug-in,
   extensions and service plug-ins. The exercise was to list the extensions
   configured in their DevStacks, set-up a new extension in the configuration
   files, re-start the Neutron server and see the attributes added by the new
   extension to ports using the client.
   - Back-end implementation: L2 agent. With graphic slides we reviewed how
   a port is connected to a virtual network using the integration bridge, the
   other bridges that are part of the landscape and we followed the flow of
   the L2 agent wiring a port for Nova. The exercise was to boot an instance,
   use ovs-vsctl and brctl to see how the port was wired, and to look at related
   pieces of code in the OVS agent and RPC classes.
   - Back-end implementation: L3 agent. With graphic slides we reviewed how
   routers and floating ips are processed and the different types of routers
   (legacy, DVR, HA, etc.). The exercise was to associate a floating ip to the
   port of the instance created in the previous exercise and using
   iptables-save, examine the entries added by the floating ip creation. We
   also looked at relevant code in the agent and RPC classes.
   - The ML2 plug-in. We reviewed the relationship of the ML2 plug-in and
   the DB plug-in and then, using slides with annotated pseudo-code, went over
   the inner workings of the ML2 plug-in: the initiation of the DB transaction,
   pre-commit and post-commit mechanism driver methods, network and port
   contexts, type drivers, port binding, the creation of the response
   dictionary and how all these elements contribute and can affect the DB
   performance. The exercise was to review actual code and then add a
   LOG.debug statement to log the vif_type attribute resulting from a port
   binding.
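
For anyone retracing the session on their own DevStack VM, the exercises above roughly correspond to command sequences like the following. This is only a sketch: it assumes the Upstream Institute DevStack VM with admin credentials sourced, and names such as test-port (and the floating IP value) are made up for illustration.

```
# ReST API exercise: create/update a port and inspect the raw request/response
openstack port create --network private --debug test-port
openstack port set --description "on-boarding demo" --debug test-port

# Extensions exercise: list the extensions the Neutron server exposes
openstack extension list --network

# L2 exercise: after booting an instance, inspect how its port is wired
sudo ovs-vsctl show      # integration and tunnel bridges, tap/qvo ports
brctl show               # the intermediate linux bridge (hybrid plugging)

# L3 exercise: after associating a floating IP, inspect the NAT rules
sudo ip netns list                       # router and dhcp namespaces
sudo iptables-save | grep 172.24.4.3     # floating IP value is illustrative
```

None of these run outside such an environment; they are only meant to tie the bullet points above to concrete commands.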

It is important to mention that the 90 minutes originally scheduled weren't
long enough to cover all these topics. Since the level of interest was so
high among the audience, we decided to try to get together the following
day. With the help of the Foundation support team, we were able to schedule
a 1 hour follow up session that was attended by about a third of the
audience and where we were able to finish all the agenda.


What went well:

   - Current Neutron team members welcoming the on-boarding attendees.
   - The fact that audience members actually showed up for a follow up
   session on Thursday at 3pm and their comments at the end (there was even
   some clapping), suggests that the combination of practical exercises and
   the slides did a good job training the prospective new team members.
   - The prompt response of the Foundation team to schedule a follow up
   session.


What needs improvement:

   - Make the OpenStack Upstream Institute DevStack VM an explicit prerequisite
   for the on-boarding session. While many of our attendees had the VM, many
   didn't. I think the importance of this is illustrated by the fact that the
   people motivated enough to show up for the follow up session all had the VM
   in their laptops and followed the exercise to the very end.
   - More time. In our experience, a 3-hour session with a break would be
   ideal. Given the importance for the community of bringing in new developers
   and the fact that our audience was willing to attend a follow up session, a
   3-hour up-front investment in new talent seems reasonable.


On Mon, May 22, 2017 at 8:11 AM, Alexandra Settle 
wrote:

> Project: Documentation and I18N
> Attendees: 3-5 (maybe?)
> Etherpad: https://etherpad.openstack.org/p/doc-onboarding
>
> What we did:
>
> We ran the session informally based off whoever was 

Re: [openstack-dev] [trove] Trove reboot meeting

2017-05-22 Thread Jeremy Stanley
On 2017-05-22 17:42:09 +0000 (+0000), MCCASLAND, TREVOR wrote:
> Thanks Jeremy, that will work for our updated weekly meeting.
> 
> To clear any confusion, this is a one-time project-scope meeting
> focused on hearing the needs of the new contributors we have as
> well as trove's current goals. It's supposed to be separate from
> the regular meeting.

Oops, thanks for the clarification. I had misunderstood this as part
of the Trove weekly meeting rescheduling, but now I see the "reboot"
meeting is something separate.
-- 
Jeremy Stanley




[openstack-dev] [Ironic] Regarding where to host Redfish vendor extensions code for Ironic

2017-05-22 Thread Shivanand Tendulker
Hi All

This is with regard to the discussion we had about where to host the
Redfish vendor extensions: should they be a sub-module within the
sushy tree, or separate vendor project(s)?

The following opinions were raised:

In favor of a sushy sub-module:
1. People see "vendor specific" repos as owned by the vendor, which
puts off contributors that aren't the vendor but happen to have that
vendor's hardware, leading to a low contribution rate.
2. There would be a lot of code duplication across vendor libs, as
extensions could be similar.
3. Code reuse would be easier.

In favor of separate vendor project(s):
1. Each vendor has its own unique hardware or firmware features
enabled through extensions.
2. The number of extensions could be large.
3. Ironic cores may find it tedious to review vendor extensions, on
top of their existing pile of work.
4. It gives each vendor the flexibility to own and maintain its own
extensions.

Detailed discussion:
http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-3/%23openstack-meeting-3.2017-05-22.log.html#t2017-05-22T17:25:13

Please let us know your opinion/comments on the same.

Thanks and Regards
stendulker



[openstack-dev] [os-upstream-institute] Meeting Today!

2017-05-22 Thread Kendall Nelson
Hello Everyone,

It's a pretty short agenda[1] today, so feel free to add things you think we
need to discuss :)

See you in #openstack-meeting-3 at 20:00 UTC!

-Kendall (diablo_rojo)

[1]https://etherpad.openstack.org/p/openstack-upstream-institute-meetings


[openstack-dev] [nova][vlan trunking] Guest networking configuration for vlan trunk

2017-05-22 Thread Robert Li (baoli)
Hi,

I’m trying to find out if there is support in nova (in terms of metadata and 
cfgdrive) to configure vlan trunking in the guest. In the ‘CLI usage example’ 
provided in this wiki https://wiki.openstack.org/wiki/Neutron/TrunkPort, it 
indicates:

# The typical cloud image will auto-configure the first NIC (eg. eth0) only and 
not the vlan interfaces (eg. eth0.VLAN-ID).
ssh VM0-ADDRESS sudo ip link add link eth0 name eth0.101 type vlan id 101

I’d like to understand why support for configuring vlan interfaces in the 
guest hasn't been added. And should it be added?
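
For reference, the wiki's manual step generalizes to one iproute2 invocation per VLAN of the trunk. A minimal sketch of the command construction (assuming a Linux guest with iproute2; the parent interface name and VLAN id are illustrative):

```shell
#!/bin/sh
# Build (but do not execute) the iproute2 command that creates the tagged
# sub-interface for one VLAN of a trunk. A boot-time agent would run the
# resulting command for each subport instead of just printing it.
vlan_cmd() {
    parent="$1"
    vid="$2"
    echo "ip link add link ${parent} name ${parent}.${vid} type vlan id ${vid}"
}

# The wiki's example: first NIC, VLAN 101
vlan_cmd eth0 101
# prints: ip link add link eth0 name eth0.101 type vlan id 101
```

If nova ever exposed the trunk subports through metadata or the config drive, a guest agent could loop over those entries and emit exactly this command per subport; that seems to be the missing piece the question is about.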

Thanks,
Robert


[openstack-dev] [requirements][stable] request for stable team to review the requirements queue

2017-05-22 Thread Matthew Thode
https://review.openstack.org/#/q/project:openstack/requirements+status:open+NOT+branch:master

We currently do not have anyone able to review/workflow these items and
it'd be nice if we could get some eyes on these reviews.

Thanks,

-- 
Matthew Thode (prometheanfire)





Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread Michał Jastrzębski
[snip]

So from Kolla perspective, since our dev guide is really also
operators guide (we are operators tool so we're kinda "special" on
that front), we'd love to handle both deployment guide, user manuals
and all that in our tree. If we could create infrastructure that would
allow us to segregate our content and manage it ourselves, I think
that would be useful. Tell us how to help:)

Cheers,
Michal



Re: [openstack-dev] [trove] Trove reboot meeting

2017-05-22 Thread MCCASLAND, TREVOR
Thanks Jeremy, that will work for our updated weekly meeting.

To clear any confusion, this is a one-time project-scope meeting focused on 
hearing the needs of the new contributors we have as well as trove's current 
goals. It's supposed to be separate from the regular meeting.

-Original Message-
From: Jeremy Stanley [mailto:fu...@yuggoth.org] 
Sent: Monday, May 22, 2017 12:30 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [trove] Trove reboot meeting

On 2017-05-22 16:50:10 +0000 (+0000), MCCASLAND, TREVOR wrote:
[...]
> Sharing google calendar events can be a pain but it only gives us a 
> notification for our calendars so.. I'm not going to bother with it 
> that much. If you really want it on your calendar you can email me and 
> I will add you to the event but you don't need to do this.

Remember, http://eavesdrop.openstack.org/#Trove_(DBaaS)_Team_Meeting
also has an ICS file link for this purpose. Anyone can retrieve that file and 
add the recurring appointment to their calendars regardless of what scheduling 
software they use (as long as it supports ICS import, which nearly all do 
directly these days).
--
Jeremy Stanley



Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread Anne Gentle
> On May 22, 2017, at 9:09 AM, Doug Hellmann  wrote:
>
> Excerpts from Dmitry Tantsur's message of 2017-05-22 12:26:25 +0200:
>>> On 05/22/2017 11:39 AM, Alexandra Settle wrote:
>>> Hi everyone,
>>>
>>> The documentation team are rapidly losing key contributors and core 
>>> reviewers.
>>> We are not alone, this is happening across the board. It is making things
>>> harder, but not impossible.
>>>
>>> Since our inception in 2010, we’ve been climbing higher and higher trying to
>>> achieve the best documentation we could, and uphold our high standards. 
>>> This is
>>> something to be incredibly proud of.
>>>
>>> However, we now need to take a step back and realise that the amount of 
>>> work we
>>> are attempting to maintain is now out of reach for the team size that we 
>>> have.
>>> At the moment we have 13 cores, of whom none are full time contributors or
>>> reviewers. This includes myself.
>>>
>>> Until this point, the documentation team has owned several manuals that 
>>> include
>>> content related to multiple projects, including an installation guide, admin
>>> guide, configuration guide, networking guide, and security guide. Because 
>>> the
>>> team no longer has the resources to own that content, we want to invert the
>>> relationship between the doc team and project teams, so that we become 
>>> liaisons
>>> to help with maintenance instead of asking for project teams to provide 
>>> liaisons
>>> to help with content. As a part of that change, we plan to move the existing
>>> content out of the central manuals repository, into repositories owned by 
>>> the
>>> appropriate project teams. Project teams will then own the content and the
>>> documentation team will assist by managing the build tools, helping with 
>>> writing
>>> guidelines and style, but not writing the bulk of the text.
>>>
>>> We currently have the infrastructure set up to empower project teams to 
>>> manage
>>> their own documentation in their own tree, and many do. As part of this 
>>> change,
>>> the rest of the existing content from the install guide and admin guide will
>>> also move into project-owned repositories. We have a few options for how to
>>> implement the move, and that's where we need feedback now.
>>>
>>> 1. We could combine all of the documentation builds, so that each project 
>>> has a
>>> single doc/source directory that includes developer, contributor, and user
>>> documentation. This option would reduce the number of build jobs we have to 
>>> run,
>>> and cut down on the number of separate sphinx configurations in each 
>>> repository.
>>> It would completely change the way we publish the results, though, and we 
>>> would
>>> need to set up redirects from all of the existing locations to the new
>>> locations and move all of the existing documentation under the new 
>>> structure.
>>>
>>> 2. We could retain the existing trees for developer and API docs, and add a 
>>> new
>>> one for "user" documentation. The installation guide, configuration guide, 
>>> and
>>> admin guide would move here for all projects. Neutron's user documentation 
>>> would
>>> include the current networking guide as well. This option would add 1 new 
>>> build
>>> to each repository, but would allow us to easily roll out the change with 
>>> less
>>> disruption in the way the site is organized and published, so there would be
>>> less work in the short term.
>>>
>>> 3. We could do option 2, but use a separate repository for the new 
>>> user-oriented
>>> documentation. This would allow project teams to delegate management of the
>>> documentation to a separate review project-sub-team, but would complicate 
>>> the
>>> process of landing code and documentation updates together so that the docs 
>>> are
>>> always up to date.
>>>
>>> Personally, I think option 2 or 3 are more realistic, for now. It does mean
>>> that an extra build would have to be maintained, but it retains that key
>>> differentiator between what is user and developer documentation and involves
>>> fewer changes to existing published contents and build jobs. I definitely 
>>> think
>>> option 1 is feasible, and would be happy to make it work if the community
>>> prefers this. We could also view option 1 as the longer-term goal, and 
>>> option 2
>>> as an incremental step toward it (option 3 would make option 1 more 
>>> complicated
>>> to achieve).
>>>
>>> What does everyone think of the proposed options? Questions? Other thoughts?
>>
>> We're already hosting install-guide and api-ref in our tree, and I'd prefer 
>> we
>> don't change it, as it's going to be annoying (especially wrt backports). I'd
>> prefer we create user-guide directory in projects, and move the user guide 
>> there.
>
> Handling backports with a merged guide is an issue we didn't come
> up with in our earlier discussions. How often do you backport doc
> changes in practice? Do you foresee merge conflicts caused by issues
> other than the files being renamed?
>

For 

Re: [openstack-dev] [trove] Trove reboot meeting

2017-05-22 Thread MCCASLAND, TREVOR
Justin, that time is just as good. 

Thursday 1400 UTC will be the time for the meeting unless we have any other 
objections.

From: Justin Cook [mailto:jhc...@secnix.com] 
Sent: Monday, May 22, 2017 12:18 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [trove] Trove reboot meeting

Trevor,

1500 UTC is 1600 BST and 1700 CEST which is quite late for the French. Our 
friends in Berlin may be up for it. I recommend 1500 BST which is 1400 UTC. Of 
course, your suggestion works well for the US west coast. 

Cheers,

Justin
On 22 May 2017 at 17:50:10, MCCASLAND, TREVOR (mailto:tm2...@att.com) wrote:
The results show a meeting time for Thursday 1500 UTC; I'll message the join 
link on this thread 15 min prior and in the #openstack-trove 

It has come to my attention that the poll time zones were not clear, I was 
under the impression everyone had their own view based on their IP's location. 
However, I think everyone made the correct assumptions and we have agreed to 
meet at 1500 UTC on Thursday. 

If you have any questions you can ask them on this thread or raise them during 
the trove meeting this Wednesday at 1500-1600 UTC Wednesday. 

Sharing google calendar events can be a pain but it only gives us a 
notification for our calendars so.. I'm not going to bother with it that much. 
If you really want it on your calendar you can email me and I will add you to 
the event but you don't need to do this. 

-Original Message- 
From: MCCASLAND, TREVOR 
Sent: Friday, May 19, 2017 2:35 PM 
To: OpenStack Development Mailing List (not for usage questions) 
 
Subject: [openstack-dev] [trove] Trove reboot meeting 


As a result of a large number of new contributors looking for direction from 
our project, we would like to host a focused meeting on the project's scope. 

Please let us know your availability for this one time meeting by using this 
doodle poll[1] 

We have brainstormed a few ideas for discussion [2]; the project's scope is not 
limited to these ideas, so if you would like to include something, please add 
it. 

Traditionally these kinds of meetings are done at the PTG, but we wanted to get 
ahead of that timeline to keep the interest going. 

The current meeting time and place is not decided yet but it will most likely 
be an impromptu virtual meeting on google hangouts or some variant but we will 
also try our best to loop the conversation back into the mailing list, our 
channel #openstack-trove and/or our project's meeting time. 

When the time is right, probably Tuesday morning, I will announce what time 
works best for everyone, and how and where to participate. 

[1] https://beta.doodle.com/poll/s36ywdz5mfwqkdvu
[2] https://etherpad.openstack.org/p/trove-reboot



Re: [openstack-dev] [trove] Trove reboot meeting

2017-05-22 Thread Jeremy Stanley
On 2017-05-22 16:50:10 +0000 (+0000), MCCASLAND, TREVOR wrote:
[...]
> Sharing google calendar events can be a pain but it only gives us
> a notification for our calendars so.. I'm not going to bother with
> it that much. If you really want it on your calendar you can email
> me and I will add you to the event but you don't need to do this.

Remember, http://eavesdrop.openstack.org/#Trove_(DBaaS)_Team_Meeting
also has an ICS file link for this purpose. Anyone can retrieve that
file and add the recurring appointment to their calendars regardless
of what scheduling software they use (as long as it supports ICS
import, which nearly all do directly these days).
-- 
Jeremy Stanley




Re: [openstack-dev] [trove] Trove reboot meeting

2017-05-22 Thread Justin Cook
Trevor,

1500 UTC is 1600 BST and 1700 CEST which is quite late for the French. Our
friends in Berlin may be up for it. I recommend 1500 BST which is 1400 UTC.
Of course, your suggestion works well for the US west coast.

Cheers,

Justin

On 22 May 2017 at 17:50:10, MCCASLAND, TREVOR (tm2...@att.com) wrote:

> The results show a meeting time for Thursday 1500 UTC; I'll message the
> join link on this thread 15 min prior and in the #openstack-trove
>
> It has come to my attention that the poll time zones were not clear, I was
> under the impression everyone had their own view based on their IP's
> location.
> However, I think everyone made the correct assumptions and we have agreed
> to meet at 1500 UTC on Thursday.
>
> If you have any questions you can ask them on this thread or raise them
> during the trove meeting this Wednesday at 1500-1600 UTC Wednesday.
>
> Sharing google calendar events can be a pain but it only gives us a
> notification for our calendars so.. I'm not going to bother with it that
> much. If you really want it on your calendar you can email me and I will
> add you to the event but you don't need to do this.
>
> -Original Message-
> From: MCCASLAND, TREVOR
> Sent: Friday, May 19, 2017 2:35 PM
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [trove] Trove reboot meeting
>
> *** Security Advisory: This Message Originated Outside of AT&T ***.
> Reference http://cso.att.com/EmailSecurity/IDSP.html for more
> information.
>
> As a result of a large number of new contributors looking for direction
> from our project, we would like to host a focused meeting on the project's
> scope.
>
> Please let us know your availability for this one-time meeting by using
> this doodle poll[1]
>
> We have brainstormed a few ideas for discussion [2], the project's scope
> is not limited to these ideas so if you would like to include something,
> please add it.
>
> Traditionally these kinds of meetings are done at the PTG, but we wanted to
> get ahead of that timeline to keep the interest going.
>
> The current meeting time and place is not decided yet but it will most
> likely be an impromptu virtual meeting on google hangouts or some variant
> but we will also try our best to loop the conversation back into the
> mailing list, our channel #openstack-trove and/or our project's meeting
> time.
>
> When the time is right (probably Tuesday morning), I will announce what
> time works best for everyone, and how and where to participate.
>
> [1] https://beta.doodle.com/poll/s36ywdz5mfwqkdvu
> [2] https://etherpad.openstack.org/p/trove-reboot
>
>


[openstack-dev] [tripleo] Sample Environment Generator

2017-05-22 Thread Ben Nemec
I've finally gotten back to working on this spec[1] and wanted to send a 
quick update.  The patch series is updated with the changes we discussed 
at the PTG, and you can find the start of it here: 
https://review.openstack.org/#/c/253638/


Obviously there are a _lot_ of environments left to convert, but I've 
tried to do a subset that demonstrate the various uses of the tool and 
validates that it can work even for more complex environments.  It's at 
a point now where I'm pretty happy with it so I would like some outside 
feedback to see if it addresses everyone's use cases.


A few things worth calling out:
-I special-cased some parameters, in two categories.  First is private 
variables that can't be renamed because they're either part of the 
public api of the roles/services or because they're Heat things that we 
don't control.  This includes things like NodeIndex, servers, and 
DefaultPasswords.  The other category is things that most environments 
shouldn't be touching, but aren't strictly "private".  EndpointMap is a 
specific example.  We do need to include that in some environments, but 
a lot of templates take that as input and by default we don't want it 
exposed as part of the interface.  My solution to this latter case was 
to exclude these params when the "all" parameter list is being used, but 
if the param is explicitly referenced in the config then it will be 
included.
-I've imposed a directory structure on the new generated templates and 
deprecated the old flat files.  If you have any thoughts on what the 
structure should be, please comment now or forever hold your peace. :-) 
It's problematic to move files around since this is sort of a public 
interface to TripleO, so it would be nice to define a once-and-forever 
structure now.
-I'll be writing a conversion guide, since there are some gotchas and 
things to keep in mind while converting environments to the tool.


I think that covers the highlights so I'll stop typing and let you take 
a look. :-)


-Ben

1: 
https://specs.openstack.org/openstack/tripleo-specs/specs/pike/environment-generator.html
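For anyone who hasn't read the spec yet, the generator's input is a YAML description of each environment; a rough sketch of what one entry might look like (the file path and parameter names here are illustrative, not taken from the tree):

```yaml
environments:
  - name: storage/cinder-nfs
    title: Enable Cinder NFS Backend
    description: |
      Configures Cinder to use an NFS export as its backend.
    files:
      puppet/services/cinder-volume.yaml:
        # 'parameters: all' would pull in every parameter except
        # special-cased ones like EndpointMap; listing names keeps
        # the generated interface explicit.
        parameters:
          - CinderNfsServers
          - CinderNfsMountOptions
```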




Re: [openstack-dev] [trove] Trove reboot meeting

2017-05-22 Thread MCCASLAND, TREVOR
The results show a meeting time of Thursday 1500 UTC; I'll message the join 
link on this thread 15 min prior and in the #openstack-trove channel.

It has come to my attention that the poll time zones were not clear; I was 
under the impression everyone had their own view based on their IP's location.
However, I think everyone made the correct assumptions and we have agreed to 
meet at 1500 UTC on Thursday.

If you have any questions, you can ask them on this thread or raise them during 
the Trove meeting this Wednesday at 1500-1600 UTC.

Sharing Google Calendar events can be a pain, and it only gives us a 
notification for our calendars, so I'm not going to bother with it that much. 
If you really want it on your calendar you can email me and I will add you to 
the event, but you don't need to do this.

-Original Message-
From: MCCASLAND, TREVOR 
Sent: Friday, May 19, 2017 2:35 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [trove] Trove reboot meeting

*** Security Advisory: This Message Originated Outside of AT&T ***.
Reference http://cso.att.com/EmailSecurity/IDSP.html for more information.

As a result of a large number of new contributors looking for direction from 
our project, we would like to host a focused meeting on the project's scope. 

Please let us know your availability for this one-time meeting by using this 
doodle poll[1]

We have brainstormed a few ideas for discussion [2], the project's scope is not 
limited to these ideas so if you would like to include something, please add it.

Traditionally these kinds of meetings are done at the PTG, but we wanted to get 
ahead of that timeline to keep the interest going.

The current meeting time and place is not decided yet but it will most likely 
be an impromptu virtual meeting on google hangouts or some variant but we will 
also try our best to loop the conversation back into the mailing list, our 
channel #openstack-trove and/or our project's meeting time.

When the time is right (probably Tuesday morning), I will announce what time 
works best for everyone, and how and where to participate.

[1] https://beta.doodle.com/poll/s36ywdz5mfwqkdvu
[2] https://etherpad.openstack.org/p/trove-reboot




Re: [openstack-dev] [Heat] revised structure of the heat-templates repository. Suggestions

2017-05-22 Thread Lance Haig

Hi,


On 22.05.17 10:43, Thomas Herve wrote:

On Fri, May 19, 2017 at 5:00 PM, Lance Haig  wrote:

Hi,

Hi Lance,

Thanks for starting this. Comments inline.


As we know the heat-templates repository has become out of date in some
respects and also has been difficult to be maintained from a community
perspective.

While it has been out of date, I'm not sure it's because it's been
difficult. We just don't have the manpower or didn't dedicate enough
time to it.

That is why I would be able to assist :-)

I want to try and organise things so that when there is a new feature or 
Heat version we know where to edit and create, and the rest will become 
deprecated as time goes on.



For me the repository is quite confusing, with different styles used to show 
certain aspects and other styles for older template examples.

This, I think, leads to confusion, and perhaps many people give up on Heat 
as a resource because things are not that clear.

 From discussions in other threads and on the IRC channel I have seen that
there is a need to change things a bit.


This is why I would like to start the discussion that we rethink the
template example repository.

I would like to open the discussion with my suggestions.

We need to differentiate templates that work on earlier versions of Heat 
from those for the currently supported versions.

I have suggested that we create directories for the different versions, so 
that there is a stable set of examples for each Heat version; these should 
always remain stable for that version, and can stay in place once it goes 
out of support.
This would mean people can find their version of Heat and know that these 
templates all work on their version.

So, a couple of things:
* Templates have a version field. This clearly shows on which version
that template ought to work.
People new to Heat and OpenStack just know that they are running Mitaka, 
Newton, Ocata, etc.; they don't know the Heat version.
I also asked the other day whether there is a list of Heat versions matched 
to OpenStack versions, and I was told that there is not.
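(For anyone following along: the version field Thomas mentions is the first line of every HOT template. A minimal sketch; the date-to-release mapping in the comment is an assumption from memory of the Heat docs, not from this thread:)

```yaml
# Assumed mapping: 2016-04-08 ~ Mitaka, 2016-10-14 ~ Newton
heat_template_version: 2016-10-14
description: Minimal template pinned to the Newton-era HOT version.
resources: {}
```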



* Except when some resources changed (neutron loadbalancer, some
ceilometer stuff), old templates should still work. If they don't,
it's a bug. Obviously we won't fix it on unmaintained versions, but we
work really hard at maintaining compatibility. I'd be surprised to
find templates that are really broken.
I am sure that there is a lot of work being done on Heat, and as one of the 
people who use it I appreciate this quite a bit.
I am working on trying to get some hardware so I can spin up different 
versions of Heat and then test our templates against them, to see what 
breaks or is broken and what we need to change or log bug reports for.

I am just waiting to see if I can arrange this.


It'd probably be nice to update all templates to the latest supported
version. But we don't remove old versions of templates, so it's also
good to keep them around, if updating the versions doesn't bring
anything new.
This is what I would like to do for the project. However I want to take 
a fresh look at how we have the templates listed and shown. I would like 
to make it easy for people who don't know heat to get started really 
quickly.

We should consider adding a docs section that includes training for new
users.

I know that there are documents hosted in the developer area and these could 
be utilized, but I would think having a documentation section in the 
repository would be a good way to keep the examples and the documents in the 
same place.
This docs directory could also host some training for new users and old ones 
on new features etc., in a similar vein to what is in this repo:
https://github.com/heat-extras/heat-tutorial

I'd rather see documentation in the main repository. It's nice to have
some stuff in heat-templates, but there is little point if the doc
isn't published anywhere. Maybe we could have links?
I am open to suggestions on this, as I know when I was trying to learn how 
to use Heat, reliable up-to-date documentation was not easy to find; it took 
working through 3 or 4 blog posts to get my head around Heat, and much 
longer to get to grips with SoftwareDeployment.
We have also been trying to educate our users on using Heat and have found 
that the documentation does not help complete newbies get started.
This is why my colleague Florin created the training in the link I posted 
before. We have found this easier to digest for newbies and engineers who 
have skills on other platforms.

We should include examples for the default hooks (e.g. Ansible, Salt, etc.) 
with SoftwareDeployments.

We found this quite helpful for new users to understand what is possible.

We have those AFAIU:
https://github.com/openstack/heat-templates/tree/master/hot/software-config/example-templates
We did use these to educate ourselves before we started thinking of our 
library.
It would be good to 

Re: [openstack-dev] [release][infra][puppet][stable] Re: [Release-job-failures] Release of openstack/puppet-nova failed

2017-05-22 Thread Jeremy Stanley
On 2017-05-22 09:06:26 -0600 (-0600), Alex Schultz wrote:
[...]
> We ran into this for the puppet-module-build check job so I created a
> puppet-agent-install builder.  Perhaps the job needs that added to it
[...]

Problem here being these repos share the common tarball jobs used
for generating python sdists, with a little custom logic baked into
run-tarball.sh[*] for detecting and adjusting when the repo is for a
Puppet module. I think this highlights the need to create custom
tarball jobs for Puppet modules, preferably by abstracting this
custom logic into a new JJB builder.

[*] https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/scripts/run-tarball.sh?id=a2b9e37#n17
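A sketch of what such a JJB macro could look like, reusing the puppet-agent-install builder mentioned elsewhere in this thread (the macro name and shell body are hypothetical, not existing project-config builders):

```yaml
# Hypothetical macro abstracting the Puppet-module branch of run-tarball.sh
- builder:
    name: puppet-module-tarball
    builders:
      - puppet-agent-install
      - shell: |
          #!/bin/bash -xe
          # Build the module tarball with the freshly installed agent,
          # instead of special-casing Puppet repos in run-tarball.sh.
          puppet module build
```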
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Is the pendulum swinging on PaaS layers?

2017-05-22 Thread Zane Bitter

On 19/05/17 17:59, Matt Riedemann wrote:

On 5/19/2017 9:36 AM, Zane Bitter wrote:


The problem is that orchestration done inside APIs is very easy to do
badly in ways that cause lots of downstream pain for users and
external orchestrators. For example, Nova already does some
orchestration: it creates a Neutron port for a server if you don't
specify one. (And then promptly forgets that it has done so.) There is
literally an entire inner platform, an orchestrator within an
orchestrator, inside Heat to try to manage the fallout from this. And
the inner platform shares none of the elegance, such as it is, of Heat
itself, but is rather a collection of cobbled-together hacks to deal
with the seemingly infinite explosion of edge cases that we kept
running into over a period of at least 5 releases.


I'm assuming you're talking about how nova used to (years ago) not keep
track of which ports it created and which ones were provided when
creating a server or attaching ports to an existing server. That was
fixed quite a while ago, so I assume anything in Heat at this point is no
longer necessary and if it is, then it's a bug in nova. i.e. if you
provide a port when creating a server, when you delete the server, nova
should not delete the port. If nova creates the port and you delete the
server, nova should then delete the port also.


Yeah, you're right, I believe that (long-fixed) bug may have been the 
genesis of it: https://bugs.launchpad.net/nova/+bug/1158684 but I could 
be mixing some issues up in my head, because I personally haven't done a 
lot of reviews in this specific area of the code.


Here is the most recent corner-case fix, which is a good example of some of 
the subtleties involved in managing a combination of explicit and 
'magical' interactions with other resources:


https://review.openstack.org/#/c/450724/2/heat/engine/resources/openstack/nova/server_network_mixin.py


The get-me-a-network thing is... better, but there's no provision for
changes after the server is created, which means we have to copy-paste
the Nova implementation into Heat to deal with update.[1] Which sounds
like a maintenance nightmare in the making. That seems to be a common
mistake: to assume that once users create something they'll never need
to touch it again, except to delete it when they're done.


I'm not really sure what you're referring to here with 'update' and [1].
Can you expand on that? I know it's a bit of a tangent.


If the user does a stack update that changes the network from 'auto' to 
'none', or vice-versa.
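Concretely, the update in question is a one-line change in the template. A minimal sketch, assuming the allocate_network key as exposed on OS::Nova::Server in recent Heat releases (image/flavor names are placeholders):

```yaml
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: cirros          # placeholder
      flavor: m1.tiny        # placeholder
      networks:
        # Flipping this between 'auto' and 'none' on stack update is the
        # case where Heat has to re-run Nova's get-me-a-network logic.
        - allocate_network: auto
```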



Don't even get me started on Neutron.[2]

Any orchestration that is done behind-the-scenes needs to be done
superbly well, provide transparency for external orchestration tools
that need to hook in to the data flow, and should be developed in
consultation with potential consumers like Shade and Heat.


Agree, this is why we push back on baking in more orchestration into
Nova, because we generally don't do it well, or don't test it well, and
end up having half-baked things which are a constant source of pain,
e.g. boot from volume - that might work fine when creating and deleting
a server, but what happens when you try to migrate, resize, rebuild,
evacuate or shelve that server?


Yeah, exactly. There is a really long tail of stuff that is easy to forget.


Am I missing the point, or is the pendulum really swinging away from
PaaS layer services which abstract the dirty details of the lower-level
IaaS APIs? Or was this always something people wanted and I've just
never made the connection until now?


(Aside: can we stop using the term 'PaaS' to refer to "everything that
Nova doesn't do"? This habit is not helping us to communicate clearly.)


Sorry, as I said in response to sdague elsewhere in this thread, I tend
to lump PaaS and orchestration / porcelain tools together, but that's
not my intent in starting this thread. I was going to say we should have
a glossary for terms in OpenStack, but we do, and both are listed. :)

https://docs.openstack.org/user-guide/common/glossary.html


Hmm, I don't love the example in that definition either.
https://review.openstack.org/466773

cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] revised Postgresql deprecation patch for governance

2017-05-22 Thread Sean Dague
On 05/15/2017 07:16 AM, Sean Dague wrote:
> We had a forum session in Boston on Postgresql and out of that agreed to
> the following steps forward:
> 
> 1. explicitly warn in operator facing documentation that Postgresql is
> less supported than MySQL. This was deemed better than just removing
> documentation, because when people see Postgresql files in tree they'll
> make assumptions (at least one set of operators did).
> 
> 2. Suse is in process of investigating migration from PG to Gallera for
> future versions of their OpenStack product. They'll make their findings
> and tooling open to help determine how burdensome this kind of
> transition would be for folks.
> 
> After those findings, we can come back with any next steps (or just
> leave it as good enough there).
> 
> The TC governance patch is updated here -
> https://review.openstack.org/#/c/427880/ - or if there are other
> discussion questions feel free to respond to this thread.

I've ended up in a number of conversations publicly and privately over
the last week. I'm trying to figure out how we best capture and
acknowledge the concerns.

My top concerns remain:

A1) Do not surprise users late, with them only finding out they are on the
less-traveled path once they are so deeply committed there is no turning
back. It's fine for users to choose that path as long as they are
informed that they are going to need to be more self-reliant.

A2) Do not prevent features like zero downtime keystone making forward
progress with a MySQL only solution. There will always be a way to
handle these things with a change window, but the non change window
version really does need more understanding of what the db is doing.

There are some orthogonal concerns

B1) PG was chosen by people in the past, maybe more than we realized;
those are real users that we don't want to throw under a bus. Wholesale
deletion is off the table. Even what deprecation might mean is hard to
figure out, given that there is "no clear path off", "missing data on
who's on it", and potentially creative solutions using it that people
would like (the CockroachDB question; though given some of the Galera
fixes that have had to go in, these things are never drop-in replacements).

B2) The upstream code isn't so irreparably changed (e.g. delete the SQLA
layer) that it's not possible to have alternative DB backends
(especially as people might want to experiment with different
approaches in the future).


I think these are actually compatible concerns. The current proposal to
me actually tries to address A1 & B1, with a hint about why A2 is
valuable and we would want to do that.

It feels like there would be a valuable follow on in which A2 & B2 were
addressed which is basically "progressive enhancements can be allowed to
only work with MySQL based backends". Which is the bit that Monty has
been pushing for in other threads.

This feels like what Tier 2 support looks like: a basic "SQLA and pray"
tier, so that if you live behind SQLA you are probably fine (though not
tested), and then testing and advanced-feature rollout on a single
platform. Any of that work might port to other platforms over time, but
we don't want to make that table stakes for enhancements.

-Sean


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuxi][stackube][kuryr] IRC meeting

2017-05-22 Thread Hongbin Lu
Hi all,

We will have an IRC meeting at UTC 1400-1500 Tuesday (2017-05-23). At the 
meeting, we will discuss the k8s storage integration with OpenStack. This 
effort might cross more than one team (i.e. kuryr and stackube). You are more 
than welcome to join us at #openstack-meeting-cp tomorrow.

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements] Projects holding back requirements updates

2017-05-22 Thread Matthew Thode
This is just a friendly reminder that, without a release from master, if
your project still caps a requirement then it may either be holding back
a package or not be included in upper-constraints updates.

http://logs.openstack.org/76/466476/5/check/gate-requirements-tox-py27-check-uc-ubuntu-xenial/d71b8f6/console.html#_2017-05-22_09_38_53_142301

http://logs.openstack.org/76/466476/4/check/gate-requirements-tox-py27-check-uc-ubuntu-xenial/27cb570/console.html#_2017-05-21_09_00_14_624413

The following packages are taken from those lists and holding back
requirements updates.

Holding back updates from upstream to be consumed by other openstack
projects:

mistral - holding back sqlalchemy (no release that uncaps sqlalchemy)
django-openstack-auth - holding back django

The following are being held back from being updated and consumed because 
they don't have a release that uncaps pbr.

mistral
os-apply-config
os-collect-config
os-refresh-config
os-vif

If these projects could make a release (something fetchable from PyPI) off
of master, that's all that's needed. They have all updated their
requirements in master; there is just no consumable release.
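For illustration, the master-vs-release difference is typically a single specifier in requirements.txt; a hypothetical before/after (exact version specifiers vary per project):

```
# Last released version (capped; holds back updates):
pbr>=1.6,<2.0

# Master (uncapped; a release cut from here resolves the issue):
pbr!=2.1.0,>=2.0.0
```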

-- 
Matthew Thode (prometheanfire)





Re: [openstack-dev] [qa] Create subnetpool on dynamic credentials

2017-05-22 Thread Attila Fazekas
In order to twist things even more ;-), we should consider making Tempest
work in environments where the users, instead of getting an IPv4 floating
IP, are allowed to get a globally routable IPv6 range (a prefix/subnet
from a subnetpool).

Tempest should be able to run connectivity tests against VMs hosted in
these subnets.

This should work regardless of test account usage, and it likely requires
some extra tweaks in our devstack environments as well.
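To make the prefix-delegation idea concrete, here is how a pool of globally routable /64s can be modeled with Python's stdlib ipaddress module (the 2001:db8::/32 documentation range stands in for a real routable block):

```python
import ipaddress

# A routable pool the cloud could delegate prefixes from.
pool = ipaddress.ip_network("2001:db8:100::/56")

# Each project gets its own /64, like a subnet allocated from a subnetpool.
prefixes = list(pool.subnets(new_prefix=64))

print(len(prefixes))   # 256
print(prefixes[0])     # 2001:db8:100::/64
```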

Best Regards,
Attila

On Mon, May 22, 2017 at 3:22 PM, Andrea Frittoli 
wrote:

> Hi Hongbin,
>
> If several of your test cases require a subnet pool, I think the simplest
> solution would be creating one in the resource creation step of the tests.
> As I understand it, subnet pools can be created by regular projects (they
> do not require admin credentials).
>
> The main advantage that I can think of for having subnet pools provisioned
> as part of the credential provider code is that - in case of
> pre-provisioned credentials - the subnet pool would be created and deleted
> once per test user as opposed to once per test class.
>
> That said I'm not opposed to the proposal in general, but if possible I
> would prefer to avoid adding complexity to an already complex part of the
> code.
>
> andrea
>
> On Sun, May 21, 2017 at 2:54 AM Hongbin Lu  wrote:
>
>> Hi QA team,
>>
>>
>>
>> I have a proposal to create subnetpool/subnet pair on dynamic
>> credentials: https://review.openstack.org/#/c/466440/ . We (Zun team)
>> have use cases for using subnets with subnetpools. I wanted to get some
>> early feedback on this proposal. Will this proposal be accepted? If not,
>> would appreciate alternative suggestion if any. Thanks in advance.
>>
>>
>>
>> Best regards,
>>
>> Hongbin


Re: [openstack-dev] [release][infra][puppet][stable] Re: [Release-job-failures] Release of openstack/puppet-nova failed

2017-05-22 Thread Alex Schultz
On Mon, May 22, 2017 at 9:05 AM, Paul Belanger  wrote:
> On Mon, May 22, 2017 at 10:53:32AM -0400, Doug Hellmann wrote:
>> Excerpts from jenkins's message of 2017-05-22 10:49:09 +:
>> > Build failed.
>> >
>> > - puppet-nova-tarball 
>> > http://logs.openstack.org/89/89c58e7958b448364cb0290c1879116f49749a68/release/puppet-nova-tarball/fe9daf7/
>> >  : FAILURE in 55s
>> > - puppet-nova-tarball-signing puppet-nova-tarball-signing : SKIPPED
>> > - puppet-nova-announce-release puppet-nova-announce-release : SKIPPED
>> >
>>
>> The most recent puppet-nova release (newton 9.5.1) failed because
>> puppet isn't installed on the tarball building node. I know that
>> node configurations just changed recently to drop puppet, but I
>> don't know what needs to be done to fix the issue for this particular
>> job. It does seem to be running bindep, so maybe we just need to
>> include puppet there?  I could use some advice & help.
>>
> We need to sync 461970[1] across all modules; I've been meaning to do this,
> but it will result in some gerrit spam. If a puppet core already has it set
> up, maybe they could do it.
>

We already did that, and it doesn't solve this problem because we didn't
add *puppet* to the bindep. We specifically don't want to do that, because
we don't necessarily want the distro-provided puppet used (it may be older
than what is supported).

> I was going to bring the puppet proposal patch[2] back online to avoid 
> manually
> doing this.
>

We probably should get that going, but we need to make sure we are
properly doing the modulesync config for all modules (hint: we
aren't).  I ran into issues with the latest version of modulesync and
that also needs to be investigated.

Thanks,
-Alex

> [1] https://review.openstack.org/#/c/461970/
> [2] https://review.openstack.org/#/c/211744/
>


Re: [openstack-dev] [release][infra][puppet][stable] Re: [Release-job-failures] Release of openstack/puppet-nova failed

2017-05-22 Thread Alex Schultz
On Mon, May 22, 2017 at 8:53 AM, Doug Hellmann  wrote:
> Excerpts from jenkins's message of 2017-05-22 10:49:09 +:
>> Build failed.
>>
>> - puppet-nova-tarball 
>> http://logs.openstack.org/89/89c58e7958b448364cb0290c1879116f49749a68/release/puppet-nova-tarball/fe9daf7/
>>  : FAILURE in 55s
>> - puppet-nova-tarball-signing puppet-nova-tarball-signing : SKIPPED
>> - puppet-nova-announce-release puppet-nova-announce-release : SKIPPED
>>
>
> The most recent puppet-nova release (newton 9.5.1) failed because
> puppet isn't installed on the tarball building node. I know that
> node configurations just changed recently to drop puppet, but I
> don't know what needs to be done to fix the issue for this particular
> job. It does seem to be running bindep, so maybe we just need to
> include puppet there?  I could use some advice & help.
>

We ran into this for the puppet-module-build check job so I created a
puppet-agent-install builder.  Perhaps the job needs that added to it

https://review.openstack.org/#/c/465156/

Thanks,
-Alex

> Doug
>


Re: [openstack-dev] [release][infra][puppet][stable] Re: [Release-job-failures] Release of openstack/puppet-nova failed

2017-05-22 Thread Paul Belanger
On Mon, May 22, 2017 at 10:53:32AM -0400, Doug Hellmann wrote:
> Excerpts from jenkins's message of 2017-05-22 10:49:09 +:
> > Build failed.
> > 
> > - puppet-nova-tarball 
> > http://logs.openstack.org/89/89c58e7958b448364cb0290c1879116f49749a68/release/puppet-nova-tarball/fe9daf7/
> >  : FAILURE in 55s
> > - puppet-nova-tarball-signing puppet-nova-tarball-signing : SKIPPED
> > - puppet-nova-announce-release puppet-nova-announce-release : SKIPPED
> > 
> 
> The most recent puppet-nova release (newton 9.5.1) failed because
> puppet isn't installed on the tarball building node. I know that
> node configurations just changed recently to drop puppet, but I
> don't know what needs to be done to fix the issue for this particular
> job. It does seem to be running bindep, so maybe we just need to
> include puppet there?  I could use some advice & help.
> 
We need to sync 461970[1] across all modules; I've been meaning to do this, but
it will result in some gerrit spam. If a puppet core already has it set up,
maybe they could do it.

I was going to bring the puppet proposal patch[2] back online to avoid manually
doing this.

[1] https://review.openstack.org/#/c/461970/
[2] https://review.openstack.org/#/c/211744/



Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread Doug Hellmann
Excerpts from Dmitry Tantsur's message of 2017-05-22 16:54:30 +0200:
> On 05/22/2017 04:09 PM, Doug Hellmann wrote:
> > Excerpts from Dmitry Tantsur's message of 2017-05-22 12:26:25 +0200:
> >> On 05/22/2017 11:39 AM, Alexandra Settle wrote:
> >>> Hi everyone,
> >>>
> >>> The documentation team are rapidly losing key contributors and core 
> >>> reviewers.
> >>> We are not alone, this is happening across the board. It is making things
> >>> harder, but not impossible.
> >>>
> >>> Since our inception in 2010, we’ve been climbing higher and higher trying 
> >>> to
> >>> achieve the best documentation we could, and uphold our high standards. 
> >>> This is
> >>> something to be incredibly proud of.
> >>>
> >>> However, we now need to take a step back and realise that the amount of 
> >>> work we
> >>> are attempting to maintain is now out of reach for the team size that we 
> >>> have.
> >>> At the moment we have 13 cores, of whom none are full time contributors or
> >>> reviewers. This includes myself.
> >>>
> >>> Until this point, the documentation team has owned several manuals that 
> >>> include
> >>> content related to multiple projects, including an installation guide, 
> >>> admin
> >>> guide, configuration guide, networking guide, and security guide. Because 
> >>> the
> >>> team no longer has the resources to own that content, we want to invert 
> >>> the
> >>> relationship between the doc team and project teams, so that we become 
> >>> liaisons
> >>> to help with maintenance instead of asking for project teams to provide 
> >>> liaisons
> >>> to help with content. As a part of that change, we plan to move the 
> >>> existing
> >>> content out of the central manuals repository, into repositories owned by 
> >>> the
> >>> appropriate project teams. Project teams will then own the content and the
> >>> documentation team will assist by managing the build tools, helping with 
> >>> writing
> >>> guidelines and style, but not writing the bulk of the text.
> >>>
> >>> We currently have the infrastructure set up to empower project teams to 
> >>> manage
> >>> their own documentation in their own tree, and many do. As part of this 
> >>> change,
> >>> the rest of the existing content from the install guide and admin guide 
> >>> will
> >>> also move into project-owned repositories. We have a few options for how 
> >>> to
> >>> implement the move, and that's where we need feedback now.
> >>>
> >>> 1. We could combine all of the documentation builds, so that each project 
> >>> has a
> >>> single doc/source directory that includes developer, contributor, and user
> >>> documentation. This option would reduce the number of build jobs we have 
> >>> to run,
> >>> and cut down on the number of separate sphinx configurations in each 
> >>> repository.
> >>> It would completely change the way we publish the results, though, and we 
> >>> would
> >>> need to set up redirects from all of the existing locations to the new
> >>> locations and move all of the existing documentation under the new 
> >>> structure.
> >>>
> >>> 2. We could retain the existing trees for developer and API docs, and add 
> >>> a new
> >>> one for "user" documentation. The installation guide, configuration 
> >>> guide, and
> >>> admin guide would move here for all projects. Neutron's user 
> >>> documentation would
> >>> include the current networking guide as well. This option would add 1 new 
> >>> build
> >>> to each repository, but would allow us to easily roll out the change with 
> >>> less
> >>> disruption in the way the site is organized and published, so there would 
> >>> be
> >>> less work in the short term.
> >>>
> >>> 3. We could do option 2, but use a separate repository for the new 
> >>> user-oriented
> >>> documentation. This would allow project teams to delegate management of 
> >>> the
> >>> documentation to a separate review project-sub-team, but would complicate 
> >>> the
> >>> process of landing code and documentation updates together so that the 
> >>> docs are
> >>> always up to date.
> >>>
> >>> Personally, I think option 2 or 3 are more realistic, for now. It does 
> >>> mean
> >>> that an extra build would have to be maintained, but it retains that key
> >>> differentiator between what is user and developer documentation and 
> >>> involves
> >>> fewer changes to existing published contents and build jobs. I definitely 
> >>> think
> >>> option 1 is feasible, and would be happy to make it work if the community
> >>> prefers this. We could also view option 1 as the longer-term goal, and 
> >>> option 2
> >>> as an incremental step toward it (option 3 would make option 1 more 
> >>> complicated
> >>> to achieve).
> >>>
> >>> What does everyone think of the proposed options? Questions? Other 
> >>> thoughts?
> >>
> >> We're already hosting install-guide and api-ref in our tree, and I'd 
> >> prefer we
> >> don't change it, as it's going to be annoying (especially wrt backports). 
> >> I'd
> >> prefer we create user-guide directory in projects, and move the user guide
> >> there.

[openstack-dev] [puppet] Meeting Reminder for May 23, 2017

2017-05-22 Thread Alex Schultz
Hey Folks,

Just a reminder that we have a meeting scheduled for tomorrow, May 23,
2017 @ 1500 UTC.  If you wish to talk about something, please add it
to the agenda[0].

Thanks,
-Alex

[0] https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20170523



Re: [openstack-dev] [ironic] [tripleo] [dib] RFC: moving/transitioning the ironic-agent element to the ironic-python-agent tree

2017-05-22 Thread Dmitry Tantsur

On 05/22/2017 03:10 PM, Sam Betts (sambetts) wrote:

I would like to suggest that we create a new repo for housing the tools
required to build ironic-python-agent images: ironic-python-agent-builder
(tooling). This would include the DIB element, the existing coreos and tinyipa
methods, and hopefully in the future the buildroot method for creating IPA
images.


+1, I like this one as well.



The reason I propose separating the tooling from IPA itself is that the tooling
is mostly detached from which version of IPA is being built into the image.
Often, when we make a change to the tooling, that change should be included in
images built for all versions of IPA, which means backporting it to every
currently maintained version of IPA.

Hopefully having this as a separate repo will also simplify packaging for 
distros as they won’t need to include IPA itself with the tooling to build it.

I’m happy with the name ironic-python-agent for the element, I think that is 
more intuitive anyway.

An RFE or multiple might be useful for tracking this work.


Ok, will create after today's meeting (I submitted this thread as a topic 
there).



Sam

On 22/05/2017, 13:40, "Dmitry Tantsur"  wrote:

 Hi all!
 
 Some time ago we discussed moving the ironic-agent element that is used to
 build IPA to the IPA tree itself. It got stuck, and I'd like to restart the
 discussion.
 
 The reason for this move is to make the DIB element in question one of the
 *official* ways to build IPA. This includes gating on both IPA and the element
 changes, which we currently don't do.
 
 The primary concern IIRC was an element name clash. We can solve it by just
 renaming the element. The new one will be called "ironic-python-agent".
 
  From the packaging perspective, we'll create a new subpackage
 openstack-ironic-python-agent-elements (the RDO name; it may differ for other
 distributions) that will only ship /usr/share/ironic-python-agent-elements
 with the ironic-python-agent element within it. To pick up the new element,
 consumers will have to add /usr/share/ironic-python-agent-elements to the
 ELEMENTS_PATH, and change the element name from ironic-agent to
 ironic-python-agent.
 
 Please let me know what you think about the approach. If there are no
 objections, I'll work on this move in the coming weeks.
 
 P.S.

 Do we need an Ironic RFE for that?
 




Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread Dmitry Tantsur

On 05/22/2017 04:09 PM, Doug Hellmann wrote:

Excerpts from Dmitry Tantsur's message of 2017-05-22 12:26:25 +0200:

On 05/22/2017 11:39 AM, Alexandra Settle wrote:

Hi everyone,

The documentation team are rapidly losing key contributors and core reviewers.
We are not alone, this is happening across the board. It is making things
harder, but not impossible.

Since our inception in 2010, we’ve been climbing higher and higher trying to
achieve the best documentation we could, and uphold our high standards. This is
something to be incredibly proud of.

However, we now need to take a step back and realise that the amount of work we
are attempting to maintain is now out of reach for the team size that we have.
At the moment we have 13 cores, of whom none are full time contributors or
reviewers. This includes myself.

Until this point, the documentation team has owned several manuals that include
content related to multiple projects, including an installation guide, admin
guide, configuration guide, networking guide, and security guide. Because the
team no longer has the resources to own that content, we want to invert the
relationship between the doc team and project teams, so that we become liaisons
to help with maintenance instead of asking for project teams to provide liaisons
to help with content. As a part of that change, we plan to move the existing
content out of the central manuals repository, into repositories owned by the
appropriate project teams. Project teams will then own the content and the
documentation team will assist by managing the build tools, helping with writing
guidelines and style, but not writing the bulk of the text.

We currently have the infrastructure set up to empower project teams to manage
their own documentation in their own tree, and many do. As part of this change,
the rest of the existing content from the install guide and admin guide will
also move into project-owned repositories. We have a few options for how to
implement the move, and that's where we need feedback now.

1. We could combine all of the documentation builds, so that each project has a
single doc/source directory that includes developer, contributor, and user
documentation. This option would reduce the number of build jobs we have to run,
and cut down on the number of separate sphinx configurations in each repository.
It would completely change the way we publish the results, though, and we would
need to set up redirects from all of the existing locations to the new
locations and move all of the existing documentation under the new structure.

2. We could retain the existing trees for developer and API docs, and add a new
one for "user" documentation. The installation guide, configuration guide, and
admin guide would move here for all projects. Neutron's user documentation would
include the current networking guide as well. This option would add 1 new build
to each repository, but would allow us to easily roll out the change with less
disruption in the way the site is organized and published, so there would be
less work in the short term.

3. We could do option 2, but use a separate repository for the new user-oriented
documentation. This would allow project teams to delegate management of the
documentation to a separate review project-sub-team, but would complicate the
process of landing code and documentation updates together so that the docs are
always up to date.

Personally, I think option 2 or 3 are more realistic, for now. It does mean
that an extra build would have to be maintained, but it retains that key
differentiator between what is user and developer documentation and involves
fewer changes to existing published contents and build jobs. I definitely think
option 1 is feasible, and would be happy to make it work if the community
prefers this. We could also view option 1 as the longer-term goal, and option 2
as an incremental step toward it (option 3 would make option 1 more complicated
to achieve).

What does everyone think of the proposed options? Questions? Other thoughts?


We're already hosting install-guide and api-ref in our tree, and I'd prefer we
don't change it, as it's going to be annoying (especially wrt backports). I'd
prefer we create user-guide directory in projects, and move the user guide 
there.


Handling backports with a merged guide is an issue we didn't come
up with in our earlier discussions. How often do you backport doc
changes in practice? Do you foresee merge conflicts caused by issues
other than the files being renamed?


When we created our in-tree install-guide, we backported the whole of it to the 
previous release :)


But mostly, I expect that if a bug fix requires a change to install-guide (the 
bug is in install-guide itself, or we need to document a know issue, or 
anything), it has to be backportable.


Files being renamed can already be an issue (sometimes git suddenly chokes on
them). Anyway, currently we're in the process of refactoring our install-guide, so

[openstack-dev] [release][infra][puppet][stable] Re: [Release-job-failures] Release of openstack/puppet-nova failed

2017-05-22 Thread Doug Hellmann
Excerpts from jenkins's message of 2017-05-22 10:49:09 +:
> Build failed.
> 
> - puppet-nova-tarball 
> http://logs.openstack.org/89/89c58e7958b448364cb0290c1879116f49749a68/release/puppet-nova-tarball/fe9daf7/
>  : FAILURE in 55s
> - puppet-nova-tarball-signing puppet-nova-tarball-signing : SKIPPED
> - puppet-nova-announce-release puppet-nova-announce-release : SKIPPED
> 

The most recent puppet-nova release (newton 9.5.1) failed because
puppet isn't installed on the tarball building node. I know that
node configurations just changed recently to drop puppet, but I
don't know what needs to be done to fix the issue for this particular
job. It does seem to be running bindep, so maybe we just need to
include puppet there?  I could use some advice & help.

Doug



Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread Doug Hellmann
Excerpts from Anne Gentle's message of 2017-05-22 08:08:40 -0500:
> On Mon, May 22, 2017 at 4:39 AM, Alexandra Settle 
> wrote:
> 
> > Hi everyone,
> >
> >
> >
> > The documentation team are rapidly losing key contributors and core
> > reviewers. We are not alone, this is happening across the board. It is
> > making things harder, but not impossible.
> >
> > Since our inception in 2010, we’ve been climbing higher and higher trying
> > to achieve the best documentation we could, and uphold our high standards.
> > This is something to be incredibly proud of.
> >
> >
> >
> > However, we now need to take a step back and realise that the amount of
> > work we are attempting to maintain is now out of reach for the team size
> > that we have. At the moment we have 13 cores, of whom none are full time
> > contributors or reviewers. This includes myself.
> >
> 
> One point I'd like to emphasize with this proposal, whichever way we go, is
> that we would prefer that the writing tasks not always fall on the devs, but
> that there can be dedicated writers or ops or end-users attending to info
> needs; it's just that they'll do the work in the repos.

I'm not sure we can assume that will be the case. If we have writers,
obviously we want their help here. But if we have no dedicated writers,
we need project teams to take more responsibility for the docs for what
they produce.

> Also, I'm working on a patch to try to quantify the best practices using
> our current data: https://review.openstack.org/#/c/461280/ We may discover
> some ways to work that mean gaining efficiencies and ensuring quality.
> Project teams should consider changes to reviewers and so on to try to be
> inclusive of the varied types of work in their repo.
> 
> I'll emphasize that we need to be extremely protective of the user space
> with this sort of move. No one who reads the docs ultimately cares about
> how they are put together. They just want to find what they need and get on
> with their lives.

For me, this is another point in favor of option 2, which involves
the least amount of disruption to existing publishing jobs (affecting
contributors) and locations (affecting consumers).  Once we transfer
ownership and have the builds working, we can discuss more significant
changes.

> > Until this point, the documentation team has owned several manuals that
> > include content related to multiple projects, including an installation
> > guide, admin guide, configuration guide, networking guide, and security
> > guide. Because the team no longer has the resources to own that content, we
> > want to invert the relationship between the doc team and project teams, so
> > that we become liaisons to help with maintenance instead of asking for
> > project teams to provide liaisons to help with content. As a part of that
> > change, we plan to move the existing content out of the central manuals
> > repository, into repositories owned by the appropriate project teams.
> > Project teams will then own the content and the documentation team will
> > assist by managing the build tools, helping with writing guidelines
> > and style, but not writing the bulk of the text.
> >
> >
> >
> > We currently have the infrastructure set up to empower project teams to
> > manage their own documentation in their own tree, and many do. As part of
> > this change, the rest of the existing content from the install guide and
> > admin guide will also move into project-owned repositories. We have a few
> > options for how to implement the move, and that's where we need feedback
> > now.
> >
> >
> >
> > 1. We could combine all of the documentation builds, so that each project
> > has a single doc/source directory that includes developer, contributor, and
> > user documentation. This option would reduce the number of build jobs we
> > have to run, and cut down on the number of separate sphinx configurations
> > in each repository. It would completely change the way we publish the
> > results, though, and we would need to set up redirects from all of the
> > existing locations to the new locations and move all of the existing
> > documentation under the new structure.
> >
> 
> I'd love to try this one. I know this is what John Dickenson has tried for
> the swift project with https://review.openstack.org/#/c/386834/ but since
> it didn't match anyone else, and I haven't heard back yet about the user
> experience, we didn't pursue much.
> 
> I'll still be pretty adamant about the user experience, so that the project
> name does not spill over into the user space. Redirects will be crucial as
> someone pointed out in one of the recent etherpads. Also, it may require
> not publishing api-ref info to developer.openstack.org (in other words, one
> job means one target for publication right now).
> 
> >
> >
> > 2. We could retain the existing trees for developer and API docs, and add
> > a new one for "user" documentation. The installation guide, configuration
> > guide, 

[openstack-dev] [all] Unified Limits (Boston Forum Session and Next Steps)

2017-05-22 Thread Sean Dague
The Unified Limits session in Boston was interesting and fruitful.
Here's the current summary of where I think we stand.

* Current Status

** Conceptual Spec Landed


https://specs.openstack.org/openstack/keystone-specs/specs/keystone/ongoing/unified-limits.html

** Keystone detailed interface spec - UP FOR REVIEW

   https://review.openstack.org/#/c/455709/
   Definitely needs lots of looking at, this is very fresh and raw

** Quota Models doc - UP FOR DISCUSSION

   https://review.openstack.org/#/c/441203/ - This is far from ready for
   review, but has been useful to drive discussion. Need some user-contributed
   models to make further progress.

* Boston Forum
  https://etherpad.openstack.org/p/BOS-forum-quotas

  The Boston Forum session was a double-length session. The first
  half was spent going through the basics of the conceptual spec, then
  a lot of talk about models.

  Some key points of understanding.

  - Limits amounts are only integers at this point. If you want to
work in percentages, that's the role of an external tool to
rebalance your limits in your cluster.
  - Strict validation means that keystone will only store limits for
services keystone understands
  - None of this is rate limiting, it's absolute usage, not usage over
time.
  - User limits should go away, everything really needs to be project
level owned.

  The models conversation was interesting, and goes about how it
  always goes. It turns out these are complicated enough algorithms
  that you can't do them in your head, even though people think they
  can. Especially the moment you require the example to work in N>=3
  levels of hierarchy. The pre-made diagrams helped push the
  conversation a bit, and a couple of ad-hoc examples helped as well.

  It does mean we need to prepare for "why can't you just", by having
  a template and some standard pathological use cases that every
  example runs through. A quota models worksheet.

* Next Steps

  Most of these come out of the Boston Forum (notes:
  https://etherpad.openstack.org/p/BOS-forum-quotas )


** TODO CERN ideal ruleset

   Get someone from the CERN team to write down what their ideal
   ruleset looks like, so it can be turned into a worked example

** TODO Chet ruleset

   Get Chet to write down his ideal ruleset here, so it can be turned
   into a worked example.

** TODO Jay's usage computing algorithm

   Jay suggested there is an algorithm that doesn't need to take into
   account the tree structure when computing usage. Get the algorithm, and
   "unit test" it with examples to validate whether that is viable or not.

* Unresolved Issues

** Who is going to work on Unified Limits?

   There is a ton of interest in getting this work done, recent
   organizational priority changes and cuts mean this entire effort is
   pretty high risk of failure largely due to lack of people having
   time to work on it.

** Can we actually put together an interface in Keystone without having
a complex model worked out?

   Good point from Colleen. As we go through these different examples
   (Flatland, and various hierarchies) it does seem like every one
   might need different details to compute usage correctly. Perhaps
   the best we can get is tagging the return datastructures with the
   model name so that we can understand a datastructure is only good
   enough to describe a single model.
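   For illustration only (these field names are invented, not from the
   keystone interface spec), "tagging the return datastructures with the
   model name" could look like the sketch below: a consumer that recognizes
   the named model can interpret the usage data, and one that doesn't can
   fail loudly instead of silently miscomputing headroom.

```python
# Hypothetical limit-response shape, invented for illustration.
limit_response = {
    "model": "strict-two-level",  # which quota model produced this data
    "project_id": "dept-a",
    "resource": "cores",
    "limit": 20,
    "usage": 9,  # only meaningful under the named model
}

def remaining(resp: dict, known_models=("strict-two-level",)) -> int:
    """Compute headroom, refusing models this consumer can't interpret."""
    if resp["model"] not in known_models:
        raise ValueError("can't interpret usage under model %r" % resp["model"])
    return resp["limit"] - resp["usage"]
```

   The design choice this sketches is exactly Colleen's point: the structure
   is only guaranteed to be good enough to describe a single model at a time.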

** How much load will this add to Keystone?

   This moves a set of API calls from services back to Keystone, and
   is looking to enable more complicated functions, which will mean
   more work. How does this change Keystone scaling?

   NOTE: this means some early optimizations are going to need to be
   put into place to address load concerns.

** Templated Catalog vs. Strict validation of service types

   Nectar and others don't add to keystone catalog, they use a
   templated catalog. That effectively means that Keystone doesn't
   actually know anything about service types. :(

** When we talk about project root, how far up are we talking?

   In my mind a hierarchical structure isn't going to be single
   rooted. However, I think the conversation about the magic domain 0
   made it clear that we may need some mechanism where a project is
   annotated to be a root (and thus ignores usage further up). This
   remains one of those areas where I think the Inside Keystone
   Internals vs. Outside view is still hitting communication gaps.

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread Doug Hellmann
Excerpts from Dmitry Tantsur's message of 2017-05-22 12:26:25 +0200:
> On 05/22/2017 11:39 AM, Alexandra Settle wrote:
> > Hi everyone,
> > 
> > The documentation team are rapidly losing key contributors and core 
> > reviewers. 
> > We are not alone, this is happening across the board. It is making things 
> > harder, but not impossible.
> > 
> > Since our inception in 2010, we’ve been climbing higher and higher trying 
> > to 
> > achieve the best documentation we could, and uphold our high standards. 
> > This is 
> > something to be incredibly proud of.
> > 
> > However, we now need to take a step back and realise that the amount of 
> > work we 
> > are attempting to maintain is now out of reach for the team size that we 
> > have. 
> > At the moment we have 13 cores, of whom none are full time contributors or 
> > reviewers. This includes myself.
> > 
> > Until this point, the documentation team has owned several manuals that 
> > include 
> > content related to multiple projects, including an installation guide, 
> > admin 
> > guide, configuration guide, networking guide, and security guide. Because 
> > the 
> > team no longer has the resources to own that content, we want to invert the 
> > relationship between the doc team and project teams, so that we become 
> > liaisons 
> > to help with maintenance instead of asking for project teams to provide 
> > liaisons 
> > to help with content. As a part of that change, we plan to move the 
> > existing 
> > content out of the central manuals repository, into repositories owned by 
> > the 
> > appropriate project teams. Project teams will then own the content and the 
> > documentation team will assist by managing the build tools, helping with 
> > writing 
> > guidelines and style, but not writing the bulk of the text.
> > 
> > We currently have the infrastructure set up to empower project teams to 
> > manage 
> > their own documentation in their own tree, and many do. As part of this 
> > change, 
> > the rest of the existing content from the install guide and admin guide 
> > will 
> > also move into project-owned repositories. We have a few options for how to 
> > implement the move, and that's where we need feedback now.
> > 
> > 1. We could combine all of the documentation builds, so that each project 
> > has a 
> > single doc/source directory that includes developer, contributor, and user 
> > documentation. This option would reduce the number of build jobs we have to 
> > run, 
> > and cut down on the number of separate sphinx configurations in each 
> > repository. 
> > It would completely change the way we publish the results, though, and we 
> > would 
> > need to set up redirects from all of the existing locations to the new 
> > locations and move all of the existing documentation under the new 
> > structure.
> > 
> > 2. We could retain the existing trees for developer and API docs, and add a 
> > new 
> > one for "user" documentation. The installation guide, configuration guide, 
> > and 
> > admin guide would move here for all projects. Neutron's user documentation 
> > would 
> > include the current networking guide as well. This option would add 1 new 
> > build 
> > to each repository, but would allow us to easily roll out the change with 
> > less 
> > disruption in the way the site is organized and published, so there would 
> > be 
> > less work in the short term.
> > 
> > 3. We could do option 2, but use a separate repository for the new 
> > user-oriented 
> > documentation. This would allow project teams to delegate management of the 
> > documentation to a separate review project-sub-team, but would complicate 
> > the 
> > process of landing code and documentation updates together so that the docs 
> > are 
> > always up to date.
> > 
> > Personally, I think option 2 or 3 are more realistic, for now. It does mean 
> > that an extra build would have to be maintained, but it retains that key 
> > differentiator between what is user and developer documentation and 
> > involves 
> > fewer changes to existing published contents and build jobs. I definitely 
> > think 
> > option 1 is feasible, and would be happy to make it work if the community 
> > prefers this. We could also view option 1 as the longer-term goal, and 
> > option 2 
> > as an incremental step toward it (option 3 would make option 1 more 
> > complicated 
> > to achieve).
> > 
> > What does everyone think of the proposed options? Questions? Other thoughts?
> 
> We're already hosting install-guide and api-ref in our tree, and I'd prefer 
> we 
> don't change it, as it's going to be annoying (especially wrt backports). I'd 
> prefer we create user-guide directory in projects, and move the user guide 
> there.

Handling backports with a merged guide is an issue we didn't come
up with in our earlier discussions. How often do you backport doc
changes in practice? Do you foresee merge conflicts caused by issues
other than the files being renamed?

Doug


Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-05-22 Thread Mike Bayer



On 05/22/2017 05:02 AM, Thierry Carrez wrote:

Mike Bayer wrote:

On 05/18/2017 06:13 PM, Adrian Turjak wrote:


So, specifically in the realm of Keystone, since we are using sqlalchemy
we already have Postgresql support, and since Cockroachdb does talk
Postgres it shouldn't be too hard to back Keystone with it. At that
stage you have a Keystone DB that could be multi-region, multi-master,
consistent, and mostly impervious to disaster. Is that not the holy
grail for a service like Keystone? Combine that with fernet tokens and
suddenly Keystone becomes a service you can't really kill, and can
mostly forget about.


So this is exhibit A for why I think keeping some level of "this might
need to work on other databases" within a codebase is always a great
idea even if you are not actively supporting other DBs at the moment.
Even if OpenStack dumped PostgreSQL completely, I'd not take the
rudimentary PG-related utilities out of oslo.db, nor would I rename all
the "mysql_XYZ" facilities to be "XYZ".
[...]

Yes, that sounds like another reason why we'd not want to aggressively
contract to the MySQL family of databases. At the very least, before we
do that, we should experiment with CockroachDB and see how reasonable it
would be to use in an OpenStack context. It might (might) hit a sweet
spot between performance, durability, database decentralization and
keeping SQL advanced features -- I'd hate it if we discovered that too late.


there's a difference between "architecting for pluggability" and 
"supporting database X, Y, and Z".   I only maintain we should keep the 
notion of pluggability around.  This doesn't mean you can't use MySQL-specific 
features; it only means that any time you use a MySQL feature, 
it's in the context of a unit of code that would be swapped out if a 
different database backend were implemented.   The vast majority 
of our database code is like this already, mostly implicitly due to 
SQLAlchemy and in other cases explicitly as we see in a lot of the 
migration scripts.
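As an illustrative sketch only (not oslo.db's actual API), the pluggability described above can be as small as a per-backend dispatch table, so that backend-specific SQL lives in a unit that can be swapped out:

```python
# Illustrative sketch: backend-specific SQL kept behind a small
# dispatch layer, so MySQL-only features live in swappable units.
BACKENDS = {}

def register(name):
    """Register a backend-specific implementation under `name`."""
    def deco(fn):
        BACKENDS[name] = fn
        return fn
    return deco

@register('mysql')
def _json_extract_mysql(column, key):
    # MySQL-specific JSON access
    return "JSON_EXTRACT(%s, '$.%s')" % (column, key)

@register('generic')
def _json_extract_generic(column, key):
    # Fallback for backends with PG-style JSON operators
    return "%s ->> '%s'" % (column, key)

def json_extract(backend, column, key):
    # Unknown backends fall back to the generic unit
    fn = BACKENDS.get(backend, BACKENDS['generic'])
    return fn(column, key)
```

The point is not the dispatch mechanism itself but that each backend-specific feature is isolated in one replaceable unit.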


I think the existence of the PG backend, combined with the immediate 
interest in getting NDB to work, and now CockroachDB, not to mention 
that there are two major MySQL variants (MySQL, MariaDB) with 
significant differences (the JSON type being one of the biggest examples), 
suggests that any modern database-enabled application can't really 
afford to hardcode completely to a single database backend without at 
least basic layers of abstraction being present.






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Boston Forum session recap - searchlight integration

2017-05-22 Thread Mike Bayer



On 05/22/2017 05:39 AM, Matthew Booth wrote:


There are also a couple of optimisations to make which I won't bother 
with up front. Dan suggested in his CellsV2 talk that we would only 
query cells where the user actually has instances. If we find users tend 
to clump in a small number of cells this would be a significant 
optimisation, although the overhead on the api node for a query 
returning no rows is probably very little. Also, I think you mentioned 
that there's an option to tell SQLA not to batch-process rows, but that 
it is less efficient for total throughput? I suspect there would be a 
point at which we'd want that. 


it's the yield_per() option and I think you should use it up front, just 
so it's there and we can hit any issues it might cause (shouldn't be any 
provided no eager loading is used).  Have it yield about 50 rows at a 
time.  The pymysql driver these days I think does not actually buffer 
the rows, but 50 is very little anyway.
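The batching pattern that yield_per() enables can be sketched in plain Python (conceptual only; the real mechanism lives inside SQLAlchemy and the DB driver):

```python
from itertools import islice

def batched_rows(cursor, size=50):
    """Yield rows one at a time while pulling them from the source
    in batches of `size` -- the streaming pattern that SQLAlchemy's
    Query.yield_per(size) enables, instead of materializing the
    full result set up front."""
    it = iter(cursor)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield from batch
```

The consumer still sees a flat stream of rows; only the memory profile changes.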





If there's a reasonable way to calculate a tipping point, that might
give us some additional life.

Bear in mind that the principal advantages to not using Searchlight are:

* It is simpler to implement
* It is simpler to manage
* It will return accurate results

Following the principle of 'as simple as possible, but no simpler', I 
think there's enormous benefit to this much simpler approach for anybody 
who doesn't need a more complex approach. However, while it reduces the 
urgency of something like the Searchlight solution, I expect there are 
going to be deployments which need that.



Moreover, instance operations (create, delete) may run in parallel
during the pagination/sort query; some cells may not respond in time,
the network connection may break, and many other abnormal cases may
happen. How to deal with abnormal query responses from some cells
is also a significant factor to consider.


Aside: For a query operation, what's the better user experience when a 
single cell is failing:


1. The whole query fails.
2. The user gets incomplete results.

Either of these is simple to implement. Incomplete results would also 
additionally be logged as an ERROR, but I can't think of any way to also 
return to the user that there's a problem with the data we returned 
without throwing an error.


Thoughts?

Matt


It's not a good idea to support pagination and sorting at the same
time (we may not provide exactly the result the end user wants) if
Searchlight is not integrated.

In fact, in Tricircle, when querying ports from Neutron where the
Tricircle central plugin is installed, the central plugin does a
similar query across local Neutron ports, and does not support
pagination and sorting together.

Best Regards
Chaoyi Huang (joehuang)


From: Matt Riedemann [mriede...@gmail.com
]
Sent: 19 May 2017 5:21
To: openstack-dev@lists.openstack.org

Subject: [openstack-dev] [nova] Boston Forum session recap -
searchlight integration

Hi everyone,

After previous summits where we had vertical tracks for Nova
sessions I
would provide a recap for each session.

The Forum in Boston was a bit different, so here I'm only
attempting to
recap the Forum sessions that I ran. Dan Smith led a session on
Cells
v2, John Garbutt led several sessions on the VM and Baremetal
platform
concept, and Sean Dague led sessions on hierarchical quotas and API
microversions, and I'm going to leave recaps for those sessions
to them.

I'll do these one at a time in separate emails.


Using Searchlight to list instances across cells in nova-api


The etherpad for this session is here [1]. The goal for this
session was
to explain the problem and proposed plan from the spec [2] to the
operators in the room and get feedback.

Polling the room we found that not many people are deploying
Searchlight
but most everyone was using ElasticSearch.

An immediate concern that came up was the complexity involved with
integrating Searchlight, especially around issues with latency
for state
changes and questioning how this does not redo the top-level
cells v1
sync issue. It admittedly does to an extent, but we don't have
all of
the weird side code paths with cells v1 and it should be
self-healing.
Kris Lindgren noted that the instance.usage.exists periodic
notification
from the computes hammers their notification bus; we suggested
he 

[openstack-dev] [tripleo] Feedback needed on sizing partitions

2017-05-22 Thread Yolanda Robla Mota
Hi
As part of the security hardened images effort:
https://blueprints.launchpad.net/tripleo/+spec/build-whole-disk-images

I created a patch to define the initial partitions on the image:
https://review.openstack.org/#/c/449122/7/elements/overcloud-secure/block-device-config.yaml

I tested those on a TripleO deployment for Newton, and everything works
fine. But I really want feedback on the right sizing, especially for
production. So I'd like people working on TripleO to validate the sizes
of the partitions and advise on the right sizing.

This work needs to land for Pike, so we don't have much time; I'd
appreciate collaboration. Thanks!

-- 

Yolanda Robla Mota

Principal Software Engineer, RHCE

Red Hat



C/Avellana 213

Urb Portugal

yrobl...@redhat.com    M: +34605641639




Re: [openstack-dev] [puppet] [tempest] L2GW not standard Tempest Plugin. How to get tests into CI?

2017-05-22 Thread Chandan kumar
On Mon, May 22, 2017 at 6:41 PM, Ghanshyam Mann  wrote:
> On Mon, May 22, 2017 at 9:42 PM, Kevin Benton  wrote:
>> Can you file a patch to adjust tox.ini of l2gw to make it the same as the
>> others?
>
> Actually it is not just tox; if we want to set up the l2gw tests as a
> Tempest plugin, it needs some refactoring of the tests and config
> options. I have not gone deep into those tests, but at first look it
> should be easy.
>
>>
>> On May 22, 2017 7:35 AM, "Ricardo Noriega De Soto" 
>> wrote:
>>>
>>> Hello guys,
>>>
>>> I'm trying to enable some tempest tests into puppet-openstack-integration
>>> project. I basically did the same procedure as with other Neutron drivers
>>> but tests were not being executed:
>>>
>>> https://review.openstack.org/#/c/460080/
>>>
>>> If you check the puppet-tempest patch, I enable the "l2gw" driver in
>>> tempest.conf under the service_available section:
>>>
>>> https://review.openstack.org/#/c/459712/
>>>
>>> However, the way these tests are called is slightly different:
>>>
>>>
>>> https://github.com/openstack/networking-l2gw/tree/master/networking_l2gw/tests
>>>
>>> https://github.com/openstack/networking-l2gw/blob/master/tox.ini#L50-L53
>
> Yes, as you mentioned, the l2gw tests are not set up as a Tempest plugin,
> but that should not matter here. The test is being skipped because the
> 'l2-gateway' extension is not enabled in the tempest config [1] in your
> patch.
>
> These tests depend on 2 conditions to run [2]:
> 1. The 'l2-gateway' extension must be enabled.
> 2. len(CONF.L2GW.l2gw_switch) < 0   This option seems not to be set in the tempest conf [3].
>
> If you configure these 2 options correctly then the tests should run.
> I was searching for an example of those config options in the
> openstack/networking-l2gw jobs, but it seems those tests do not run
> there. Do we run those tests anywhere?
>
> Currently the tests depend on Tempest plus some extra config options,
> which makes the l2gw tests hard to configure and run. To make it simple,
> I recommend making the l2gw tests a Tempest plugin if they can be. It
> should be simple, though. We have a nice doc for setting up a plugin
> [4], but if you need help the QA team will be happy to help.
>

I have added a patch upstream which implements tempest plugin for
networking-l2gw : https://review.openstack.org/#/c/466728/
Feel free to take it forward.

Thanks,

Chandan Kumar



Re: [openstack-dev] [nova] Boston Forum session recap - searchlight integration

2017-05-22 Thread Sean Dague
On 05/22/2017 05:39 AM, Matthew Booth wrote:
> Aside: For a query operation, what's the better user experience when a
> single cell is failing:
> 
> 1. The whole query fails.
> 2. The user gets incomplete results.
> 
> Either of these is simple to implement. Incomplete results would also
> additionally be logged as an ERROR, but I can't think of any way to also
> return to the user that there's a problem with the data we returned
> without throwing an error.

The rough plan of record was to abuse HTTP 206 as an indicator that
something is missing in the result set, and return best information we
can reconstruct from the top level database.

In the filtered case, that means some stuff might silently get dropped.
In the all_instances / paginated case, you would get everything for the
project_id of your token, just some returned servers would only have
server uuid.
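The merge-and-degrade behaviour described above might look roughly like this (a sketch with illustrative names, not the actual nova code):

```python
# Sketch: merge per-cell results; a failed cell degrades the
# response to HTTP 206 (partial content) instead of failing
# the whole server listing.
def merge_cell_results(cell_results):
    """cell_results: dict of cell name -> list of servers,
    or None for a cell that timed out / is unreachable."""
    servers, degraded = [], False
    for cell, result in cell_results.items():
        if result is None:
            degraded = True  # best effort: skip the cell, flag partial data
            continue
        servers.extend(result)
    status = 206 if degraded else 200
    return status, servers
```

A microversion, as suggested below in the thread, could carry a more explicit "all sources reported" flag alongside the status code.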

We could also put a microversion in place so that something more
specific about server list status (all sources reported) was there.

No one expects a 500 error on server list, so we definitely don't want
to give that to people.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [qa] Create subnetpool on dynamic credentials

2017-05-22 Thread Andrea Frittoli
Hi Hongbin,

If several of your test cases require a subnet pool, I think the simplest
solution would be creating one in the resource creation step of the tests.
As I understand it, subnet pools can be created by regular projects (they
do not require admin credentials).
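A hedged sketch of that option: each test class provisions its own subnet pool during resource setup and deletes it on cleanup. The client method names below mirror Tempest's service-client style but are assumptions, not the exact interface:

```python
# Hypothetical fixture: provision a subnet pool per test class in
# the resource-setup step. `create_subnetpool`/`delete_subnetpool`
# are assumed names modelled on Tempest's service-client style.
class SubnetPoolFixture:
    def __init__(self, subnetpools_client):
        self.client = subnetpools_client
        self.pool_id = None

    def set_up(self):
        body = self.client.create_subnetpool(
            name='test-subnetpool', prefixes=['10.10.0.0/16'])
        self.pool_id = body['subnetpool']['id']
        return self.pool_id

    def clean_up(self):
        if self.pool_id is not None:
            self.client.delete_subnetpool(self.pool_id)
            self.pool_id = None
```

Compared to doing it in the credential provider, this keeps the complexity in the tests that actually need the resource.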

The main advantage that I can think of for having subnet pools provisioned
as part of the credential provider code is that - in case of
pre-provisioned credentials - the subnet pool would be created and deleted
once per test user as opposed to once per test class.

That said I'm not opposed to the proposal in general, but if possible I
would prefer to avoid adding complexity to an already complex part of the
code.

andrea

On Sun, May 21, 2017 at 2:54 AM Hongbin Lu  wrote:

> Hi QA team,
>
>
>
> I have a proposal to create subnetpool/subnet pair on dynamic credentials:
> https://review.openstack.org/#/c/466440/ . We (Zun team) have use cases
> for using subnets with subnetpools. I wanted to get some early feedback on
> this proposal. Will this proposal be accepted? If not, would appreciate
> alternative suggestion if any. Thanks in advance.
>
>
>
> Best regards,
>
> Hongbin


Re: [openstack-dev] [ptg] ptgbot: how to make "what's currently happening" emerge

2017-05-22 Thread Thierry Carrez
Thierry Carrez wrote:
> [...]
> I have POC code for this bot already. Before I publish it (and start
> work to make infra support it), I just wanted to see if this is the
> right direction and if I should continue to work on it :) I feel like
> it's an incremental improvement that preserves the flexibility and
> self-scheduling while addressing the main visibility concern. If you
> have better ideas, please let me know !

Thanks for the feedback! Since the idea seems to have some support, I
updated and published the code at:

https://github.com/ttx/ptgbot

It's still pretty basic -- in particular it's missing all the code to
make it extract information from cells in ethercalc and seamlessly merge
that onto the rendered page. It also is pretty permissive about what is
a "room" and who can issue orders to it.

I'll work to push it in an OpenStack hosted repository, and then to be
autodeployed on OpenStack infrastructure.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [puppet] [tempest] L2GW not standard Tempest Plugin. How to get tests into CI?

2017-05-22 Thread Ghanshyam Mann
On Mon, May 22, 2017 at 9:42 PM, Kevin Benton  wrote:
> Can you file a patch to adjust tox.ini of l2gw to make it the same as the
> others?

Actually it is not just tox; if we want to set up the l2gw tests as a
Tempest plugin, it needs some refactoring of the tests and config
options. I have not gone deep into those tests, but at first look it
should be easy.

>
> On May 22, 2017 7:35 AM, "Ricardo Noriega De Soto" 
> wrote:
>>
>> Hello guys,
>>
>> I'm trying to enable some tempest tests into puppet-openstack-integration
>> project. I basically did the same procedure as with other Neutron drivers
>> but tests were not being executed:
>>
>> https://review.openstack.org/#/c/460080/
>>
>> If you check the puppet-tempest patch, I enable the "l2gw" driver in
>> tempest.conf under the service_available section:
>>
>> https://review.openstack.org/#/c/459712/
>>
>> However, the way these tests are called is slightly different:
>>
>>
>> https://github.com/openstack/networking-l2gw/tree/master/networking_l2gw/tests
>>
>> https://github.com/openstack/networking-l2gw/blob/master/tox.ini#L50-L53

Yes, as you mentioned, the l2gw tests are not set up as a Tempest plugin,
but that should not matter here. The test is being skipped because the
'l2-gateway' extension is not enabled in the tempest config [1] in your
patch.

These tests depend on 2 conditions to run [2]:
1. The 'l2-gateway' extension must be enabled.
2. len(CONF.L2GW.l2gw_switch) < 0   This option seems not to be set in the tempest conf [3].

If you configure these 2 options correctly then the tests should run.
I was searching for an example of those config options in the
openstack/networking-l2gw jobs, but it seems those tests do not run
there. Do we run those tests anywhere?

Currently the tests depend on Tempest plus some extra config options,
which makes the l2gw tests hard to configure and run. To make it simple,
I recommend making the l2gw tests a Tempest plugin if they can be. It
should be simple, though. We have a nice doc for setting up a plugin
[4], but if you need help the QA team will be happy to help.

>>
>> Is there any recommendation on how to approach this?? I don't think
>> setting environment variables in puppet-openstack-integration is acceptable.
>> I would love to get some advice around this.
>>
>> Thank you guys!!
>>
>> --
>> Ricardo Noriega
>>
>> Senior Software Engineer - NFV Partner Engineer | Office of Technology  |
>> Red Hat
>> irc: rnoriega @freenode
>>
>>

..[1]  
http://logs.openstack.org/80/460080/27/check/gate-puppet-openstack-integration-4-scenario004-tempest-centos-7/3b23503/logs/tempest.conf.txt.gz

..[2]  
https://github.com/openstack/networking-l2gw/blob/master/networking_l2gw/tests/api/test_l2gw_extensions.py#L55-L60

..[3]  
https://github.com/openstack/networking-l2gw/blob/master/networking_l2gw/tests/tempest/config.py

..[4] https://docs.openstack.org/developer/tempest/plugin.html

Thanks
gmann

>


Re: [openstack-dev] [all] Onboarding rooms postmortem, what did you do, what worked, lessons learned

2017-05-22 Thread Alexandra Settle
Project: Documentation and I18N
Attendees: 3-5 (maybe?)
Etherpad: https://etherpad.openstack.org/p/doc-onboarding 

What we did:

We ran the session informally based off whoever was there. Due to the small 
attendance, we just ran through how the project works (docs like code and all 
that).
Discussed the docs ML, IRC, and how best to get started (find some low hanging 
fruit). Ian also took the group through the translation team process, and gave 
a little demo on how the Zanata translation tool was used.

We gave everyone back 30 minutes of their lives.

On 5/19/17, 2:22 PM, "Sean Dague"  wrote:

This is a thread for anyone that participated in the onboarding rooms,
on either the presenter or audience side. Because we all went into this
creating things from whole cloth, I'm sure there are lots of lessons
learned.

If you ran a room, please post the project, what you did in the room,
what you think worked, what you would have done differently. If you
attended a room you didn't run, please provide feedback about which one
it was, and what you thought worked / didn't work from the other side of
the table.

Hopefully we can consolidate some of that feedback for best practices
going forward.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread Anne Gentle
On Mon, May 22, 2017 at 4:39 AM, Alexandra Settle 
wrote:

> Hi everyone,
>
>
>
> The documentation team are rapidly losing key contributors and core
> reviewers. We are not alone, this is happening across the board. It is
> making things harder, but not impossible.
>
> Since our inception in 2010, we’ve been climbing higher and higher trying
> to achieve the best documentation we could, and uphold our high standards.
> This is something to be incredibly proud of.
>
>
>
> However, we now need to take a step back and realise that the amount of
> work we are attempting to maintain is now out of reach for the team size
> that we have. At the moment we have 13 cores, of whom none are full time
> contributors or reviewers. This includes myself.
>

One point I'd like to emphasize with this proposal, any way we go, is that
we would prefer that the writing tasks not always fall on the devs, but
that there can be dedicated writers or ops or end-users attending to info
needs, it's just that they'll do the work in the repos.

Also, I'm working on a patch to try to quantify the best practices using
our current data: https://review.openstack.org/#/c/461280/ We may discover
some ways to work that mean gaining efficiencies and ensuring quality.
Project teams should consider changes to reviewers and so on to try to be
inclusive of the varied types of work in their repo.

I'll emphasize that we need to be extremely protective of the user space
with this sort of move. No one who reads the docs ultimately cares about
how they are put together. They just want to find what they need and get on
with their lives.


>
>
> Until this point, the documentation team has owned several manuals that
> include content related to multiple projects, including an installation
> guide, admin guide, configuration guide, networking guide, and security
> guide. Because the team no longer has the resources to own that content, we
> want to invert the relationship between the doc team and project teams, so
> that we become liaisons to help with maintenance instead of asking for
> project teams to provide liaisons to help with content. As a part of that
> change, we plan to move the existing content out of the central manuals
> repository, into repositories owned by the appropriate project teams.
> Project teams will then own the content and the documentation team will
> assist by managing the build tools, helping with writing guidelines
> and style, but not writing the bulk of the text.
>
>
>
> We currently have the infrastructure set up to empower project teams to
> manage their own documentation in their own tree, and many do. As part of
> this change, the rest of the existing content from the install guide and
> admin guide will also move into project-owned repositories. We have a few
> options for how to implement the move, and that's where we need feedback
> now.
>
>
>
> 1. We could combine all of the documentation builds, so that each project
> has a single doc/source directory that includes developer, contributor, and
> user documentation. This option would reduce the number of build jobs we
> have to run, and cut down on the number of separate sphinx configurations
> in each repository. It would completely change the way we publish the
> results, though, and we would need to set up redirects from all of the
> existing locations to the new locations and move all of the existing
> documentation under the new structure.
>

I'd love to try this one. I know this is what John Dickenson has tried for
the swift project with https://review.openstack.org/#/c/386834/ but since
it didn't match anyone else, and I haven't heard back yet about the user
experience, we didn't pursue much.

I'll still be pretty adamant about the user experience, so that the project
name does not spill over into the user space. Redirects will be crucial as
someone pointed out in one of the recent etherpads. Also, it may require
not publishing api-ref info to developer.openstack.org (in other words, one
job means one target for publication right now).


>
>
> 2. We could retain the existing trees for developer and API docs, and add
> a new one for "user" documentation. The installation guide, configuration
> guide, and admin guide would move here for all projects. Neutron's user
> documentation would include the current networking guide as well. This
> option would add 1 new build to each repository, but would allow us to
> easily roll out the change with less disruption in the way the site is
> organized and published, so there would be less work in the short term.
>
>
>
> 3. We could do option 2, but use a separate repository for the new
> user-oriented documentation. This would allow project teams to delegate
> management of the documentation to a separate review project-sub-team, but
> would complicate the process of landing code and documentation updates
> together so that the docs are always up to date.
>

It's possible the data could point us in 

Re: [openstack-dev] [ironic] [tripleo] [dib] RFC: moving/transitioning the ironic-agent element to the ironic-python-agent tree

2017-05-22 Thread Sam Betts (sambetts)
I would like to suggest that we create a new repo for housing the tools 
required to build Ironic Python Agent images: 
ironic-python-agent-builder (tooling). This would include the DIB element, the 
existing CoreOS and TinyIPA methods, and hopefully in the future the buildroot 
method for creating IPA images.

The reason I propose separating the tooling from IPA itself is that the tooling 
is mostly detached from which version of IPA is being built into the image. Often, 
when we make a change to the tooling, that change should be included in 
images built for all versions of IPA, which currently means backporting those 
changes to all maintained versions of IPA.

Hopefully having this as a separate repo will also simplify packaging for 
distros as they won’t need to include IPA itself with the tooling to build it.

I’m happy with the name ironic-python-agent for the element, I think that is 
more intuitive anyway.

An RFE or multiple might be useful for tracking this work.

Sam

On 22/05/2017, 13:40, "Dmitry Tantsur"  wrote:

Hi all!

Some time ago we discussed moving ironic-agent element that is used to 
build IPA 
to IPA tree itself. It got stuck, and I'd like to restart the discussion.

The reason for this move is to make the DIB element in question one of 
*official* ways to build IPA. This includes gating on both IPA and the 
element 
changes, which we currently don't do.

The primary concern IIRC was a name clash between elements. We can solve it by 
renaming the element. The new one will be called "ironic-python-agent".

 From the packaging perspective, we'll create a new subpackage 
openstack-ironic-python-agent-elements (the RDO name, may differ for other 
distribution) that will only ship /usr/share/ironic-python-agent-elements 
with 
the ironic-python-agent element within it. To pick the new element, the 
consumers will have to add /usr/share/ironic-python-agent-elements to the 
ELEMENTS_PATH, and change the element name from ironic-agent to 
ironic-python-agent.
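Concretely, the consumer-side change might look like this (a command sketch assuming the proposed install path and element name; exact packaging may differ):

```shell
# Command sketch (assumed path/name from the proposal above):
# point diskimage-builder at the new element location ...
export ELEMENTS_PATH=/usr/share/ironic-python-agent-elements
# ... and use the renamed element instead of "ironic-agent"
disk-image-create -o ipa-image fedora ironic-python-agent
```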

Please let me know what you think about the approach. If there are no 
objections, 
I'll work on this move in the coming weeks.

P.S.
Do we need an Ironic RFE for that?



Re: [openstack-dev] [all] Onboarding rooms postmortem, what did you do, what worked, lessons learned

2017-05-22 Thread Telles Nobrega
Project: Sahara
Attendees: 6-8 (1 never involved in Sahara)

We gave a quick overview of how Sahara works and planned to work a
little on code. Since most of the people there had already worked on
Sahara, the code introduction didn't make a lot of sense; the only
rookie was most interested in how to deploy and use Sahara in his
environment. So the conversation took an unexpected turn and we talked
more about how Sahara could be a solution for a specific use case.

Overall it worked well, but not as we planned from the beginning.



On Mon, May 22, 2017 at 5:20 AM Steven Hardy  wrote:

> On Fri, May 19, 2017 at 09:22:07AM -0400, Sean Dague wrote:
> > This is a thread for anyone that participated in the onboarding rooms,
> > on either the presenter or audience side. Because we all went into this
> > creating things from whole cloth, I'm sure there are lots of lessons
> > learned.
> >
> > If you ran a room, please post the project, what you did in the room,
> > what you think worked, what you would have done differently. If you
> > attended a room you didn't run, please provide feedback about which one
> > it was, and what you thought worked / didn't work from the other side of
> > the table.
>
> TripleO:
> Attendees - nearly full room (~30 people?)
>
> We took an informal approach to our session, we polled the room asking for
> questions, and on request gave an architectural overview and some
> code/template walkthroughs, then had open questions/discussion for the
> remainder of the session.
>
> Overall it worked quite well, but next time I would like visibility of
> some specific questions/topics ahead of time to enable better preparation
> of demo/slide content, and also we should have prepared a demo environment
> prior to the session to enable easier hands-on examples/demos.
>
> Overall I thought the new track was a good idea, and the feedback I got
> from those attending was positive.
>
> The slides we used are linked from this blog post:
>
>
> http://hardysteven.blogspot.co.uk/2017/05/openstack-summit-tripleo-project.html
>
> Steve
>
>
-- 

TELLES NOBREGA

SOFTWARE ENGINEER

Red Hat I 

tenob...@redhat.com

TRIED. TESTED. TRUSTED. 


Re: [openstack-dev] [nova] Boston Forum session recap - searchlight integration

2017-05-22 Thread Belmiro Moreira
Hi Matt,
if by "incomplete results" you mean retrieving the instance UUIDs (in the
cell_api) for the cells that failed to answer,
I would prefer incomplete results to a failed operation.

Belmiro

On Mon, May 22, 2017 at 11:39 AM, Matthew Booth  wrote:

> On 19 May 2017 at 20:07, Mike Bayer  wrote:
>
>>
>>
>> On 05/19/2017 02:46 AM, joehuang wrote:
>>
>>> Support sort and pagination together will be the biggest challenge: it's
>>> up to how many cells will be involved in the query, 3,5 may be OK, you can
>>> search each cells, and cached data. But how about 20, 50 or more, and how
>>> many data will be cached?
>>>
>>
>>
>> I've talked to Matthew in Boston and I am also a little concerned about
>> this.The approach involves trying to fetch just the smallest number of
>> records possible from each backend, merging them as they come in, and then
>> discarding the rest (unfetched) once there's enough for a page. But there
>> is latency around invoking query before any results are received, and the
>> database driver really wants to send out all the rows as well, not to
>> mention the ORM (with configurability) wants to convert the whole set of
>> rows received to objects, all has overhead.
>>
>
> There was always going to come a point where there are too many cells for
> this approach to be viable. After our chat, I now think that point is
> considerably lower than I thought before, as I didn't appreciate that the
> ORM is also doing its own batching.
>
>
>> To at least handle the problem of 50 connections that have all executed a
>> statement and are waiting on results, parallelizing means there needs to
>> be a thread pool, greenlet pool, or explicit non-blocking approach put in
>> place.  The "thread pool" would be the approach that's possible, which with
>> eventlet monkeypatching transparently becomes a greenlet pool.  But that's
>> where this starts getting a little intense for something you want to do in
>> the context of "a web request".   So I think the DB-based solution here is
>> feasible, but I'm a little skeptical of it at higher scale.   Usually, the
>> search engine would be something pluggable, like "SQL" or "searchlight".
>>
>
> I'm not overly concerned about the threading aspect. I understood from our
> chat that the remote query overhead (being the only part we can actually
> parallelise anyway) is incurred entirely before returning the first row
> from SQLA. My plan is simply to fetch the first row of each query using
> concurrent.futures to allow all the remote queries to run in parallel, and
> all subsequent rows with blocking IO in the main thread. This will be
> relatively uncomplicated, and after the initial queries have run won't
> involve a whole lot of thread switching.
>
> There are also a couple of optimisations to make which I won't bother with
> up front. Dan suggested in his CellsV2 talk that we would only query cells
> where the user actually has instances. If we find users tend to clump in a
> small number of cells this would be a significant optimisation, although
> the overhead on the api node for a query returning no rows is probably very
> little. Also, I think you mentioned that there's an option to tell SQLA not
> to batch-process rows, but that it is less efficient for total throughput?
> I suspect there would be a point at which we'd want that. If there's a
> reasonable way to calculate a tipping point, that might give us some
> additional life.
>
> Bear in mind that the principal advantages to not using Searchlight are:
>
> * It is simpler to implement
> * It is simpler to manage
> * It will return accurate results
>
> Following the principle of 'as simple as possible, but no simpler', I
> think there's enormous benefit to this much simpler approach for anybody
> who doesn't need a more complex approach. However, while it reduces the
> urgency of something like the Searchlight solution, I expect there are
> going to be deployments which need that.
>
>
>>> Moreover, instance operations (create, delete) may happen in parallel
>>> during the pagination/sort query, some cells may not provide a response in
>>> time, the network connection may break, and many other abnormal cases may
>>> happen. How to deal with abnormal query responses from some cells is also
>>> a great factor to be considered.
>>>
>>
> Aside: For a query operation, what's the better user experience when a
> single cell is failing:
>
> 1. The whole query fails.
> 2. The user gets incomplete results.
>
> Either of these is simple to implement. Incomplete results would also be
> logged as an ERROR, but I can't think of any way to also tell the user that
> there's a problem with the data we returned without throwing an error.
>
> Thoughts?
>
> Matt
>
>
>>
>>> It's not good idea to support pagination and sort at the same time (may
>>> not provide exactly the result end user want) if searchlight should not be

Re: [openstack-dev] [puppet] [tempest] L2GW not standard Tempest Plugin. How to get tests into CI?

2017-05-22 Thread Kevin Benton
Can you file a patch to adjust tox.ini of l2gw to make it the same as the
others?
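To make "the same as the others" concrete, here is a rough sketch of how standard Tempest plugins are wired up: through a setup.cfg entry point rather than a project-specific tox target, so `tempest run` can discover the tests without extra environment variables. The class path below is illustrative, not necessarily networking-l2gw's actual one.

```ini
# setup.cfg of a standard Tempest plugin (names are illustrative)
[entry_points]
tempest.test_plugins =
    l2gw-tests = networking_l2gw.tests.tempest.plugin:L2GatewayTempestPlugin
```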

On May 22, 2017 7:35 AM, "Ricardo Noriega De Soto" 
wrote:

> Hello guys,
>
> I'm trying to enable some tempest tests in the puppet-openstack-integration
> project. I basically followed the same procedure as with other Neutron drivers,
> but the tests were not being executed:
>
> https://review.openstack.org/#/c/460080/
>
> If you check the puppet-tempest patch, I enable the "l2gw" driver in
> tempest.conf under the service_available section:
>
> https://review.openstack.org/#/c/459712/
>
> However, the way these tests are called is slightly different:
>
> https://github.com/openstack/networking-l2gw/tree/master/
> networking_l2gw/tests
>
> https://github.com/openstack/networking-l2gw/blob/master/tox.ini#L50-L53
>
> Is there any recommendation on how to approach this? I don't think
> setting environment variables in puppet-openstack-integration is
> acceptable. I would love to get some advice around this.
>
> Thank you guys!!
>
> --
> Ricardo Noriega
>
> Senior Software Engineer - NFV Partner Engineer | Office of Technology  |
> Red Hat
> irc: rnoriega @freenode
>
>


[openstack-dev] [ironic] [tripleo] [dib] RFC: moving/transitioning the ironic-agent element to the ironic-python-agent tree

2017-05-22 Thread Dmitry Tantsur

Hi all!

Some time ago we discussed moving the ironic-agent element that is used to build
IPA into the IPA tree itself. The discussion got stuck, and I'd like to restart it.


The reason for this move is to make the DIB element in question one of the
*official* ways to build IPA. This includes gating on changes to both IPA and
the element, which we currently don't do.


The primary concern, IIRC, was an element name clash. We can solve it by simply
renaming the element. The new one will be called "ironic-python-agent".


From the packaging perspective, we'll create a new subpackage,
openstack-ironic-python-agent-elements (the RDO name; it may differ for other
distributions), that will only ship /usr/share/ironic-python-agent-elements with
the ironic-python-agent element within it. To pick up the new element, consumers
will have to add /usr/share/ironic-python-agent-elements to the ELEMENTS_PATH
and change the element name from ironic-agent to ironic-python-agent.


Please let me know what you think about the approach. If there are no
objections, I'll work on this move in the coming weeks.


P.S.
Do we need an Ironic RFE for that?



[openstack-dev] [Openstack-dev][Tacker] Not able to run user_data commands on my instance

2017-05-22 Thread Vishnu Pajjuri
Hi,

I have installed OpenStack with Tacker via devstack.

I'm able to run the OpenWRT VNF and to configure the firewall service with
the openwrt management driver.

I'm also able to run shell commands in the CirrOS image, which also uses the
openwrt management driver.


Now I have created an Ubuntu image, and I'm able to launch it through Tacker.

On this instance I want to run some shell commands through Tacker's user_data
feature, but no commands are executed.

Is it possible to run commands on custom images, unlike cirros/openwrt?

If yes, kindly share the procedure to create a proper Ubuntu image.



Below is the TOSCA config file:


tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

description: Demo with user-data

metadata:
  template_name: sample-vnfd-userdata

topology_template:
  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker
      capabilities:
        nfv_compute:
          properties:
            num_cpus: 1
            mem_size: 1024 MB
            disk_size: 1 GB
      properties:
        image: ubuntu-image
        config: |
          param0: key1
          param1: key2
        mgmt_driver: openwrt
        config_drive: true
        user_data_format: RAW
        user_data: |
          #!/bin/sh
          echo "my hostname is `hostname`" > /tmp/hostname
          date > /tmp/date
          ifconfig > /tmp/ifconfig
          df -h > /tmp/diskinfo
    CP1:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        management: true
        order: 0
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1
    VL1:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: net_mgmt
        vendor: ACME

Regards,
-Vishnu


Re: [openstack-dev] [tc] Active or passive role with our database layer

2017-05-22 Thread Sean Dague
On 05/21/2017 10:09 PM, Mike Bayer wrote:

>>
>> A similar issue lurks with the fact that MySQL unicode storage is
>> 3-byte by default and 4-byte is opt-in. We could take the 'external'
>> approach and document it and assume the operator has configured their
>> my.cnf with the appropriate default, or taken an 'active' approach
>> where we override it in all the models and make migrations to get us
>> from 3 to 4 byte.
> 
> let's force MySQL to use utf8mb4!   Although I am curious what the actual
> use case we want to hit here is (which gets into the fact that zzzeek is
> ignorant as to which unicode glyphs actually live in 4-byte utf8 characters).

There are sets of existing CJK ideographs in the 4-byte range, and the
reality is that not all of the world's languages are encoded in Unicode
yet, so more Asian languages will probably land here in the future.

We've had specific bug reports about this in Nova, but it's actually hard to
dig out of, because that db migration seems expensive.
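For illustration, a stdlib-only check of which characters actually need the 4-byte encoding, plus a sketch of the application-side opt-in. The connection URL below uses placeholder host/credentials and is an assumption about deployment, not Nova's actual configuration.

```python
# BMP CJK ideographs fit in MySQL's legacy 3-byte "utf8" charset;
# characters from the supplementary planes (CJK Extension B, emoji, ...)
# need 4 bytes in UTF-8 and therefore require utf8mb4.
for ch in ("\u4e2d", "\U0002070e", "\U0001f600"):
    print("U+%04X -> %d bytes" % (ord(ch), len(ch.encode("utf-8"))))
# U+4E2D -> 3 bytes, U+2070E -> 4 bytes, U+1F600 -> 4 bytes

# The application-side opt-in is just a charset argument in the
# SQLAlchemy/oslo.db connection URL (placeholder host/credentials):
connection = "mysql+pymysql://nova:secret@db.example/nova?charset=utf8mb4"
```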

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] [puppet] [tempest] L2GW not standard Tempest Plugin. How to get tests into CI?

2017-05-22 Thread Ricardo Noriega De Soto
Hello guys,

I'm trying to enable some tempest tests in the puppet-openstack-integration
project. I basically followed the same procedure as with other Neutron drivers,
but the tests were not being executed:

https://review.openstack.org/#/c/460080/

If you check the puppet-tempest patch, I enable the "l2gw" driver in
tempest.conf under the service_available section:

https://review.openstack.org/#/c/459712/

However, the way these tests are called is slightly different:

https://github.com/openstack/networking-l2gw/tree/master/networking_l2gw/tests

https://github.com/openstack/networking-l2gw/blob/master/tox.ini#L50-L53

Is there any recommendation on how to approach this? I don't think setting
environment variables in puppet-openstack-integration is acceptable. I
would love to get some advice around this.

Thank you guys!!

-- 
Ricardo Noriega

Senior Software Engineer - NFV Partner Engineer | Office of Technology  |
Red Hat
irc: rnoriega @freenode


Re: [openstack-dev] [freezer] Core team updates

2017-05-22 Thread Mathieu, Pierre-Arthur
+1

Thank you for your contributions Vitaly!

From: Saad Zaher 
Sent: Monday, May 22, 2017 11:30:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [freezer] Core team updates

Hi,

I would like to propose some core member updates to the Freezer core team. I 
would like to add the following user(s) to core, as they are actively 
contributing and reviewing code upstream, as well as helping people in the irc 
channel:


  *   Vitaliy Nogin ( vnogin )

Please vote +1 if you agree to these changes; otherwise vote -1 and explain your 
opinion.

If there is no objection, I plan to add him before the end of this week.

--
Best Regards,
Saad!



[openstack-dev] [freezer] Core team updates

2017-05-22 Thread Saad Zaher
Hi,

I would like to propose some core member updates to the Freezer core team.
I would like to add the following user(s) to core, as they are actively
contributing and reviewing code upstream, as well as helping people in
the irc channel:


   - Vitaliy Nogin ( vnogin )


Please vote +1 if you agree to these changes; otherwise vote -1 and explain your
opinion.

If there is no objection, I plan to add him before the end of this week.

--
Best Regards,
Saad!


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread Dmitry Tantsur

On 05/22/2017 11:39 AM, Alexandra Settle wrote:

Hi everyone,

The documentation team are rapidly losing key contributors and core reviewers. 
We are not alone, this is happening across the board. It is making things 
harder, but not impossible.


Since our inception in 2010, we’ve been climbing higher and higher trying to 
achieve the best documentation we could, and uphold our high standards. This is 
something to be incredibly proud of.


However, we now need to take a step back and realise that the amount of work we 
are attempting to maintain is now out of reach for the team size that we have. 
At the moment we have 13 cores, of whom none are full time contributors or 
reviewers. This includes myself.


Until this point, the documentation team has owned several manuals that include 
content related to multiple projects, including an installation guide, admin 
guide, configuration guide, networking guide, and security guide. Because the 
team no longer has the resources to own that content, we want to invert the 
relationship between the doc team and project teams, so that we become liaisons 
to help with maintenance instead of asking for project teams to provide liaisons 
to help with content. As a part of that change, we plan to move the existing 
content out of the central manuals repository, into repositories owned by the 
appropriate project teams. Project teams will then own the content and the 
documentation team will assist by managing the build tools, helping with writing 
guidelines and style, but not writing the bulk of the text.


We currently have the infrastructure set up to empower project teams to manage 
their own documentation in their own tree, and many do. As part of this change, 
the rest of the existing content from the install guide and admin guide will 
also move into project-owned repositories. We have a few options for how to 
implement the move, and that's where we need feedback now.


1. We could combine all of the documentation builds, so that each project has a 
single doc/source directory that includes developer, contributor, and user 
documentation. This option would reduce the number of build jobs we have to run, 
and cut down on the number of separate sphinx configurations in each repository. 
It would completely change the way we publish the results, though, and we would 
need to set up redirects from all of the existing locations to the new 
locations and move all of the existing documentation under the new structure.


2. We could retain the existing trees for developer and API docs, and add a new 
one for "user" documentation. The installation guide, configuration guide, and 
admin guide would move here for all projects. Neutron's user documentation would 
include the current networking guide as well. This option would add 1 new build 
to each repository, but would allow us to easily roll out the change with less 
disruption in the way the site is organized and published, so there would be 
less work in the short term.


3. We could do option 2, but use a separate repository for the new user-oriented 
documentation. This would allow project teams to delegate management of the 
documentation to a separate review project-sub-team, but would complicate the 
process of landing code and documentation updates together so that the docs are 
always up to date.


Personally, I think option 2 or 3 are more realistic, for now. It does mean 
that an extra build would have to be maintained, but it retains that key 
differentiator between what is user and developer documentation and involves 
fewer changes to existing published contents and build jobs. I definitely think 
option 1 is feasible, and would be happy to make it work if the community 
prefers this. We could also view option 1 as the longer-term goal, and option 2 
as an incremental step toward it (option 3 would make option 1 more complicated 
to achieve).


What does everyone think of the proposed options? Questions? Other thoughts?


We're already hosting install-guide and api-ref in our tree, and I'd prefer we 
don't change that, as changing it is going to be annoying (especially wrt 
backports). I'd prefer we create a user-guide directory in the project trees, and 
move the user guide content there.




Cheers,

Alex









Re: [openstack-dev] [nova] Boston Forum session recap - searchlight integration

2017-05-22 Thread Matthew Booth
On 19 May 2017 at 20:07, Mike Bayer  wrote:

>
>
> On 05/19/2017 02:46 AM, joehuang wrote:
>
>> Supporting sort and pagination together will be the biggest challenge: it
>> depends on how many cells are involved in the query. With 3 or 5 it may be OK:
>> you can search each cell and cache the data. But how about 20, 50 or more, and
>> how much data will be cached?
>>
>
>
> I've talked to Matthew in Boston and I am also a little concerned about
> this. The approach involves trying to fetch the smallest number of
> records possible from each backend, merging them as they come in, and then
> discarding the rest (unfetched) once there's enough for a page. But there
> is latency around invoking the query before any results are received, the
> database driver really wants to send out all the rows as well, and the
> ORM (with configurability) wants to convert the whole set of rows
> received to objects; all of this has overhead.
>

There was always going to come a point where there are too many cells for
this approach to be viable. After our chat, I now think that point is
considerably lower than I thought before, as I didn't appreciate that the
ORM is also doing its own batching.


> To at least handle the problem of 50 connections that have all executed a
> statement and are waiting on results, parallelizing means there needs to
> be a thread pool, greenlet pool, or explicit non-blocking approach put in
> place.  The "thread pool" would be the approach that's possible, which with
> eventlet monkeypatching transparently becomes a greenlet pool.  But that's
> where this starts getting a little intense for something you want to do in
> the context of "a web request".   So I think the DB-based solution here is
> feasible, but I'm a little skeptical of it at higher scale.   Usually, the
> search engine would be something pluggable, like "SQL" or "searchlight".
>

I'm not overly concerned about the threading aspect. I understood from our
chat that the remote query overhead (being the only part we can actually
parallelise anyway) is incurred entirely before returning the first row
from SQLA. My plan is simply to fetch the first row of each query using
concurrent.futures to allow all the remote queries to run in parallel, and
all subsequent rows with blocking IO in the main thread. This will be
relatively uncomplicated, and after the initial queries have run won't
involve a whole lot of thread switching.
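A rough sketch of that plan, with toy sorted lists standing in for the per-cell SQLAlchemy result iterators; the function shape and the list-based "cells" are illustrative assumptions, not Nova code:

```python
import concurrent.futures
import heapq
import itertools

def list_across_cells(cell_queries, limit):
    """Merge sorted per-cell streams, pulling only `limit` rows in total."""
    def first_row(run_query):
        it = iter(run_query())    # the per-cell query latency is paid here
        return next(it, None), it

    # Pay the first-row latency of every cell in parallel...
    with concurrent.futures.ThreadPoolExecutor() as pool:
        streams = [
            itertools.chain([head], rest)
            for head, rest in pool.map(first_row, cell_queries)
            if head is not None
        ]

    # ...then pull the remaining rows with blocking IO in the calling
    # thread, stopping once the page is full; unfetched rows stay unfetched.
    return list(itertools.islice(heapq.merge(*streams), limit))

# Toy cells, each returning instances already sorted by the requested key.
cells = [lambda: [1, 4, 7], lambda: [2, 5, 8], lambda: [3, 6, 9]]
print(list_across_cells(cells, limit=4))   # [1, 2, 3, 4]
```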

There are also a couple of optimisations to make which I won't bother with
up front. Dan suggested in his CellsV2 talk that we would only query cells
where the user actually has instances. If we find users tend to clump in a
small number of cells this would be a significant optimisation, although
the overhead on the api node for a query returning no rows is probably very
little. Also, I think you mentioned that there's an option to tell SQLA not
to batch-process rows, but that it is less efficient for total throughput?
I suspect there would be a point at which we'd want that. If there's a
reasonable way to calculate a tipping point, that might give us some
additional life.

Bear in mind that the principal advantages to not using Searchlight are:

* It is simpler to implement
* It is simpler to manage
* It will return accurate results

Following the principle of 'as simple as possible, but no simpler', I think
there's enormous benefit to this much simpler approach for anybody who
doesn't need a more complex approach. However, while it reduces the urgency
of something like the Searchlight solution, I expect there are going to be
deployments which need that.


>> Moreover, instance operations (create, delete) may happen in parallel
>> during the pagination/sort query, some cells may not provide a response in
>> time, the network connection may break, and many other abnormal cases may
>> happen. How to deal with abnormal query responses from some cells is also
>> a great factor to be considered.
>>
>
Aside: For a query operation, what's the better user experience when a
single cell is failing:

1. The whole query fails.
2. The user gets incomplete results.

Either of these is simple to implement. Incomplete results would also be
logged as an ERROR, but I can't think of any way to also tell the user that
there's a problem with the data we returned without throwing an error.

Thoughts?

Matt


>
>> It's not a good idea to support pagination and sort at the same time (it
>> may not provide exactly the result the end user wants) if searchlight is
>> not to be integrated.
>>
>> In fact, in Tricircle, when querying ports from neutron where the tricircle
>> central plugin is installed, the central plugin does a similar query across
>> the local Neutron ports, and does not support pagination and sort together.
>>
>> Best Regards
>> Chaoyi Huang (joehuang)
>>
>> 
>> From: Matt Riedemann [mriede...@gmail.com]
>> Sent: 19 May 2017 5:21
>> To: 

[openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread Alexandra Settle
Hi everyone,

The documentation team are rapidly losing key contributors and core reviewers. 
We are not alone, this is happening across the board. It is making things 
harder, but not impossible.
Since our inception in 2010, we’ve been climbing higher and higher trying to 
achieve the best documentation we could, and uphold our high standards. This is 
something to be incredibly proud of.

However, we now need to take a step back and realise that the amount of work we 
are attempting to maintain is now out of reach for the team size that we have. 
At the moment we have 13 cores, of whom none are full time contributors or 
reviewers. This includes myself.

Until this point, the documentation team has owned several manuals that include 
content related to multiple projects, including an installation guide, admin 
guide, configuration guide, networking guide, and security guide. Because the 
team no longer has the resources to own that content, we want to invert the 
relationship between the doc team and project teams, so that we become liaisons 
to help with maintenance instead of asking for project teams to provide 
liaisons to help with content. As a part of that change, we plan to move the 
existing content out of the central manuals repository, into repositories owned 
by the appropriate project teams. Project teams will then own the content and 
the documentation team will assist by managing the build tools, helping with 
writing guidelines and style, but not writing the bulk of the text.

We currently have the infrastructure set up to empower project teams to manage 
their own documentation in their own tree, and many do. As part of this change, 
the rest of the existing content from the install guide and admin guide will 
also move into project-owned repositories. We have a few options for how to 
implement the move, and that's where we need feedback now.

1. We could combine all of the documentation builds, so that each project has a 
single doc/source directory that includes developer, contributor, and user 
documentation. This option would reduce the number of build jobs we have to 
run, and cut down on the number of separate sphinx configurations in each 
repository. It would completely change the way we publish the results, though, 
and we would need to set up redirects from all of the existing locations to the 
new locations and move all of the existing documentation under the new 
structure.

2. We could retain the existing trees for developer and API docs, and add a new 
one for "user" documentation. The installation guide, configuration guide, and 
admin guide would move here for all projects. Neutron's user documentation 
would include the current networking guide as well. This option would add 1 new 
build to each repository, but would allow us to easily roll out the change with 
less disruption in the way the site is organized and published, so there would 
be less work in the short term.

3. We could do option 2, but use a separate repository for the new 
user-oriented documentation. This would allow project teams to delegate 
management of the documentation to a separate review project-sub-team, but 
would complicate the process of landing code and documentation updates together 
so that the docs are always up to date.

Personally, I think option 2 or 3 are more realistic, for now. It does mean 
that an extra build would have to be maintained, but it retains that key 
differentiator between what is user and developer documentation and involves 
fewer changes to existing published contents and build jobs. I definitely think 
option 1 is feasible, and would be happy to make it work if the community 
prefers this. We could also view option 1 as the longer-term goal, and option 2 
as an incremental step toward it (option 3 would make option 1 more complicated 
to achieve).

What does everyone think of the proposed options? Questions? Other thoughts?

Cheers,

Alex




Re: [openstack-dev] [oslo] can we make everyone drop eventlet? (was: Can we stop global requirements update?)

2017-05-22 Thread ChangBo Guo
We discussed this for a long time. As far as I know, some projects use eventlet
heavily, like Nova [1].
If everyone agrees with removing eventlet, we can set it as one of the community
goals [2].

[1]https://github.com/openstack/nova/blob/master/doc/source/threading.rst
[2]https://etherpad.openstack.org/p/community-goals

2017-05-20 5:09 GMT+08:00 Mike Bayer :

> FTFY
>
>
>
> On 05/19/2017 03:58 PM, Joshua Harlow wrote:
>
>> Mehdi Abaakouk wrote:
>>
>>> Not really, I just put some comments on reviews and discussed this on IRC,
>>> since nobody except Telemetry has expressed a wish, or tried, to get rid of
>>> eventlet.
>>>
>>
>> Octavia is using cotyledon and they have gotten rid of eventlet. It didn't
>> seem like it was that hard to do either (of course the experience of how easy
>> it was is likely not transferable to other projects...)
>>
>> -Josh
>>
>>
>
>



-- 
ChangBo Guo(gcb)


Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-05-22 Thread Thierry Carrez
Mike Bayer wrote:
> On 05/18/2017 06:13 PM, Adrian Turjak wrote:
>>
>> So, specifically in the realm of Keystone, since we are using sqlalchemy
>> we already have Postgresql support, and since Cockroachdb does talk
>> Postgres it shouldn't be too hard to back Keystone with it. At that
>> stage you have a Keystone DB that could be multi-region, multi-master,
>> consistent, and mostly impervious to disaster. Is that not the holy
>> grail for a service like Keystone? Combine that with fernet tokens and
>> suddenly Keystone becomes a service you can't really kill, and can
>> mostly forget about.
> 
> So this is exhibit A for why I think keeping some level of "this might
> need to work on other databases" within a codebase is always a great
> idea even if you are not actively supporting other DBs at the moment.
> Even if OpenStack dumped PostgreSQL completely, I'd not take the
> rudimentary PG-related utilities out of oslo.db, nor would I rename all
> the "mysql_XYZ" facilities to "XYZ".
> [...]
Yes, that sounds like another reason why we'd not want to aggressively
contract to the MySQL family of databases. At the very least, before we
do that, we should experiment with CockroachDB and see how reasonable it
would be to use in an OpenStack context. It might (might) hit a sweet
spot between performance, durability, database decentralization and
keeping SQL advanced features -- I'd hate it if we discovered that too late.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Heat] revised structure of the heat-templates repository. Suggestions

2017-05-22 Thread Thomas Herve
On Fri, May 19, 2017 at 5:00 PM, Lance Haig  wrote:
> Hi,

Hi Lance,

Thanks for starting this. Comments inline.

> As we know, the heat-templates repository has become out of date in some
> respects, and has also been difficult to maintain from a community
> perspective.

While it has been out of date, I'm not sure that's because it's been
difficult to maintain. We just don't have the manpower, or didn't dedicate
enough time to it.

> For me the repository is quite confusing, with different styles used to
> show certain aspects and other styles in older template examples.
>
> This, I think, leads to confusion, and perhaps makes many people give up on
> heat as a resource, as things are not that clear.
>
> From discussions in other threads and on the IRC channel I have seen that
> there is a need to change things a bit.
>
>
> This is why I would like to start a discussion about rethinking the
> template example repository.
>
> I would like to open the discussion with my suggestions.
>
> We need to differentiate templates that work on earlier versions of heat
> from those that work on the currently supported versions.
>
> I have suggested that we create directories that relate to the different
> versions, so that we have a stable set of examples for each heat version.
> These should always remain stable for that version, and can remain there
> once it goes out of support.
> This would mean people can find their version of heat and know that these
> templates all work on their version.

So, a couple of things:
* Templates have a version field. This clearly shows on which version
that template ought to work.
* Except when some resources changed (neutron loadbalancer, some
ceilometer stuff), old templates should still work. If they don't,
it's a bug. Obviously we won't fix it on unmaintained versions, but we
work really hard at maintaining compatibility. I'd be surprised to
find templates that are really broken.

It'd probably be nice to update all templates to the latest supported
version. But we don't remove old versions of templates, so it's also
good to keep them around, if updating the versions doesn't bring
anything new.
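
To illustrate the point about the version field, here is a minimal HOT
template sketch (hypothetical; the resource, image and flavor names are
placeholders). The `heat_template_version` key is what pins the feature
set a template relies on, independent of any directory layout:

```yaml
# Minimal illustrative template: heat_template_version declares which
# HOT feature set this template targets (2016-10-14 is the Newton one).
heat_template_version: 2016-10-14

description: Minimal template illustrating the version field.

parameters:
  flavor:
    type: string
    default: m1.small

resources:
  server:
    type: OS::Nova::Server
    properties:
      flavor: { get_param: flavor }
      image: cirros
```

A client or the engine can reject the template up front if it declares a
version the deployment does not support, which is why per-version
directories add little on top of the field itself.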

> We should consider adding a docs section that includes training for new
> users.
>
> I know that there are documents hosted in the developer area, and these
> could be utilized, but I think having a documentation section in the
> repository would be a good way to keep the examples and the documents in
> the same place.
> This docs directory could also host some training for new users and old ones
> on new features etc.. In a similar line to what is here in this repo
> https://github.com/heat-extras/heat-tutorial

I'd rather see documentation in the main repository. It's nice to have
some stuff in heat-templates, but there is little point if the doc
isn't published anywhere. Maybe we could have links?

> We should include examples for the default hooks (e.g. Ansible, Salt,
> etc.) with SoftwareDeployments.
>
> We found this quite helpful for new users to understand what is possible.

We have those AFAIU:
https://github.com/openstack/heat-templates/tree/master/hot/software-config/example-templates
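For reference, a deployment using the ansible hook can be sketched
roughly like this (a hypothetical example along the lines of those in
that directory; the playbook content and the `server_id` parameter are
placeholders):

```yaml
heat_template_version: 2016-10-14

parameters:
  server_id:
    type: string
    description: ID of an existing server with the heat-config-ansible hook.

resources:
  # The config holds an Ansible playbook; group selects the ansible hook.
  install_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ansible
      config: |
        ---
        - hosts: localhost
          tasks:
            - name: Write a marker file
              copy:
                content: "configured\n"
                dest: /tmp/heat-configured

  # The deployment applies the config to the given server.
  install_deploy:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: { get_resource: install_config }
      server: { get_param: server_id }
```

Swapping `group: ansible` for another hook (salt, puppet, script, ...)
is the main thing such examples need to demonstrate.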

> We should make sure that the validation running against the templates runs
> without ignoring errors.
>
> It was noted in IRC that some errors were ignored because the endpoints
> or catalog were not available. It would be good to have some form of
> headless catalog server that tests can run against, so that template
> developers can validate before submitting patches.

Yep that's a good idea, I've written a patch here:
https://review.openstack.org/465860
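Even without a headless catalog, part of the value is simply parsing
every template offline and failing loudly instead of ignoring errors.
A rough sketch of such a check (hypothetical, not the linked review;
it only needs PyYAML, no running services) might look like:

```python
"""Offline sanity check for HOT templates: no endpoints or catalog
required, and problems are reported rather than silently ignored."""
import sys

import yaml


def check_template(text):
    """Return a list of problems found in a single template body."""
    try:
        doc = yaml.safe_load(text)
    except yaml.YAMLError as exc:
        return ["not valid YAML: %s" % exc]
    if not isinstance(doc, dict):
        return ["top level is not a mapping"]
    problems = []
    if "heat_template_version" not in doc:
        problems.append("missing heat_template_version")
    for name, res in (doc.get("resources") or {}).items():
        if not isinstance(res, dict) or "type" not in res:
            problems.append("resource %r has no type" % name)
    return problems


if __name__ == "__main__":
    failed = False
    for path in sys.argv[1:]:
        with open(path) as f:
            for problem in check_template(f.read()):
                failed = True
                print("%s: %s" % (path, problem))
    sys.exit(1 if failed else 0)
```

A gate job could run this over the whole repository and exit non-zero on
any problem; resource-type resolution against a real catalog would then
be a second, optional layer on top.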

Thanks,

-- 
Thomas



Re: [openstack-dev] [all] Onboarding rooms postmortem, what did you do, what worked, lessons learned

2017-05-22 Thread Steven Hardy
On Fri, May 19, 2017 at 09:22:07AM -0400, Sean Dague wrote:
> This is a thread for anyone that participated in the onboarding rooms,
> on either the presenter or audience side. Because we all went into this
> creating things from whole cloth, I'm sure there are lots of lessons
> learned.
> 
> If you ran a room, please post the project, what you did in the room,
> what you think worked, what you would have done differently. If you
> attended a room you didn't run, please provide feedback about which one
> it was, and what you thought worked / didn't work from the other side of
> the table.

TripleO:
Attendees - nearly full room (~30 people?)

We took an informal approach to our session: we polled the room for
questions, gave an architectural overview and some code/template
walkthroughs on request, then had open questions/discussion for the
remainder of the session.

Overall it worked quite well, but next time I would like visibility of
some specific questions/topics ahead of time to enable better preparation
of demo/slide content. We should also have prepared a demo environment
prior to the session to enable easier hands-on examples/demos.

Overall I thought the new track was a good idea, and the feedback I got
from those attending was positive.

The slides we used are linked from this blog post:

http://hardysteven.blogspot.co.uk/2017/05/openstack-summit-tripleo-project.html

Steve



Re: [openstack-dev] [all] Project On-Boarding Info Collection

2017-05-22 Thread joehuang
Hello,

We did not use training slides in the Tricircle on-boarding session, but
would like to create some for new contributors later. As for a summary of
the on-boarding session, I have replied in the other thread that collects
feedback.

Best Regards
Chaoyi Huang (joehuang)

From: Kendall Nelson [kennelso...@gmail.com]
Sent: 09 May 2017 2:57
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [all] Project On-Boarding Info Collection

Hello!

If you are running a project onboarding session and have etherpads/slides/
etc. you are using to educate new contributors, please send them to me! I am
collecting all the resources you are sharing into a single place for people
that weren't able to attend the sessions.

Thanks!

Kendall (diablo_rojo)


Re: [openstack-dev] [telemetry] Room during the next PTG

2017-05-22 Thread Hanxi Liu
Hi team,
>
> It's time for us to request a room (or share one) for the next PTG in
> September if we want to meet. Last time we did not. Do we want one this
> time?
>
>
>
+1. I have always thought it a pity that we have no weekly meeting; the
whole Telemetry team needs to communicate.
A PTG room would not only provide a good chance to communicate within the
team but also attract more new people to contribute.
In my opinion, discussion promotes the growth of the project. Perhaps
statistics on the number of people who could attend the meeting would be
more convincing.

cheers,

Hanxi Liu


Re: [openstack-dev] [all] Onboarding rooms postmortem, what did you do, what worked, lessons learned

2017-05-22 Thread joehuang
Tricircle shared a room with Sahara, and stayed there for the first half.

Around 6 people joined the session. Due to a network issue (on the lab
side) I was not able to log on to my environment to do the training on a
live environment, so I had to play some recorded clips. During the
playback we discussed a lot of topics: the overall architecture and
functionality, whether it supports cross-Neutron L2 networking, and how to
set up an environment to try it out. I can't remember all the details. It
seems 45 minutes is too short for an on-boarding session; lots of other
topics were not discussed. We left the room after Sahara began their
session, since two projects in the same room would be quite noisy, with
many people talking at the same time. After the session, one attendee
continued to talk with me about Tricircle for around half an hour.

Obviously, an on-boarding session is valuable for a project. Some
attendees may become contributors and some may not, but there are lots of
people who want to learn a project in more detail, and the session helps a
project grow contributors and (potential) operators.

Best Regards
Chaoyi Huang (joehuang)


From: Sean Dague [s...@dague.net]
Sent: 19 May 2017 21:22
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [all] Onboarding rooms postmortem, what did you do, 
what worked, lessons learned

This is a thread for anyone that participated in the onboarding rooms,
on either the presenter or audience side. Because we all went into this
creating things from whole cloth, I'm sure there are lots of lessons
learned.

If you ran a room, please post the project, what you did in the room,
what you think worked, what you would have done differently. If you
attended a room you didn't run, please provide feedback about which one
it was, and what you thought worked / didn't work from the other side of
the table.

Hopefully we can consolidate some of that feedback for best practices
going forward.

-Sean

--
Sean Dague
http://dague.net

