Re: [openstack-dev] [ironic] Summit recap

2016-05-12 Thread Yuiko Takada
Hi,

Jim, thank you for the recap and the blog post!

> # Newton priorities
>
> [Etherpad](
https://etherpad.openstack.org/p/ironic-newton-summit-priorities)
>
> We discussed our priorities for the Newton cycle here.
>
> Of note, we decided that we need to get cold upgrade testing (i.e.
> grenade) running ASAP. We have lots of large changes lined up that feel
> like they could easily break upgrades, and want to be able to test them.
> Much of the team is jumping in to help get this going.
>
> The priorities for the cycle have been published
> [here](
http://specs.openstack.org/openstack/ironic-specs/priorities/newton-priorities.html
).
I'd like to discuss priorities.
I don't think we should refuse to implement items that are not on the
priority list, but we do need to concentrate on the high-priority items
that have to be completed in this cycle.
This applies especially to core developers, because they have the
Workflow+1 privilege :)

I think there are two concerns.
One is that we need to set the scope for the cycle.
For example, we have been implementing Neutron integration since the L cycle,
and we want to complete it in this cycle.
There are many things we want, and completing everything would take a very
long time. So we need to set priorities within Neutron integration as well,
and accept deferring some items to the next cycle.

The other is that we need to be a little more strict about priorities.
Recently, I watched the presentation
"How Open Source Projects Survive Poisonous People (And You Can Too)".
https://www.youtube.com/watch?v=Q52kFL8zVoM
In this presentation, Subversion developers talk about how to manage an
OSS community. They keep a TODO list, and when a new proposal comes in
that is not on the TODO list, it is declined.
Do we need to do the same thing? Of course not.
But now is the time to start thinking about this...

Thank you for reading this long email.
What should we do to manage the project efficiently?
Everyone's input is welcome.


Best Regards,
Yuiko Takada Mori



2016-05-11 23:16 GMT+09:00 Jim Rollenhagen :

> Others made good points for posting this on the ML, so here it is in
> full. Sorry for the markdown formatting, I just copied this from the
> blog post.
>
> // jim
>
> Another cycle, another summit. The ironic project had ten design summit
> sessions to get together and chat about some of our current and future
> work. We also led a cross-project session on bare metal networking, had
> a joint session with nova, and a contributor's meetup for the first half
> of Friday. The following is a summary of those sessions.
>
> # Cross-project: the future of bare-metal networking
>
> [Etherpad](https://etherpad.openstack.org/p/newton-baremetal-networking)
>
> This session was meant to have the Nova, Ironic, and Neutron folks get
> together and figure out some of the details of the [work we're
> doing](https://review.openstack.org/#/c/277853/) to decouple the
> physical network infrastructure from the logical networking that users
> interact with. Unfortunately, we spent most of the time explaining the
> problem and the goals, and not much time actually figuring out how
> things should work. We were able to decide that the trunk port work in
> neutron should mostly work for us.
>
> There were plenty of hallway chats about this throughout the week, and
> from those I think we have a good idea of what needs to be done. The
> spec linked above will be updated soon to clarify where we stand.
>
> # Nova-compatible serial and graphical consoles
>
> [Etherpad](https://etherpad.openstack.org/p/ironic-newton-summit-console)
>
> This session began with a number of proposals to implement serial and
> graphical consoles that would work with Nova, and a goal to narrow them
> down so folks can move forward with the code.
>
> The first thing we decided is that in the short term, we want to focus
> on the serial console. It's supported by almost all hardware and most
> cases where someone needs a console are covered by a simple serial
> console. We do want to do graphical consoles eventually, but would like
> to take one thing at a time.
>
> We then spent some time dissecting our requirements (and preferences)
> for what we want an implementation to do, which are listed toward the
> bottom of the etherpad.
>
> We narrowed the serial console work down to two implementations:
>
> * [ironic-console-server](https://review.openstack.org/#/c/306755/).
>   The tl;dr here is that the conductor will shell out to a command that
>   creates a listening port, and forks a process that connects to the
>   console, and pipes data between the two. This command is called once
>   per console session. The upside with this approach is that operators
>   don't need to do much when the change is deployed.
>
> * [ironic-ipmiproxy](https://review.openstack.org/#/c/296869/).
>   This is similar to ironic-console-server, except that it runs as its
>   own daemon with a small REST API for start/stop/get. It spawns a
>   process for each 

Re: [openstack-dev] [openstack-ansible] LBaaSv2 / Octavia support

2016-05-12 Thread Xav Paice
Thanks for explaining that - I thought I was going mad.  You're right about
the implementation challenges!

TBH, I'm writing something that would work at least in our environment and
trying to keep it as small and simple as possible so we can maintain it -
currently one of our dev team is adding a feature or two to make Octavia
match our business requirements, and I'm working on the deployment.
Openstack-ansible is quite a new approach for our deployment (we've done
most things via puppet till now) - what I was really after was some examples
to crib from, but if I manage to beat you to it, it might wind up the other
way round.  The Puppet deployment has been really good until recently, but
like many, we're now unable to do 'big bang' upgrades, and the lack of
orchestration in Puppet is a real limitation.

I'm happy to be involved with the implementation, but until we're using
openstack-ansible for our deployments my ability to test/run things would
be quite limited.

Maybe this is the push I need to knuckle down and migrate.

On 12 May 2016 at 00:33, Major Hayden  wrote:

> On 05/10/2016 11:58 PM, Xav Paice wrote:
> > Sorry to dig up an ancient thread.
> >
> > I see the spec has been implemented, and in the os_neutron repo I see
> > configs for the Haproxy driver for LOADBALANCERV2 - but not Octavia.  Am I
> > missing something here?
>
> Hello Xav,
>
> No need to apologize -- I should have sent an update sooner. :)
>
> After a thorough review, we decided to go forward with LBaaSv2 via the agent
> since we needed something to quickly replace the now-deprecated LBaaSv1
> API.  Octavia is still on the roadmap, but there are some implementation
> challenges that need more attention.
>
> I'm working to get more involved in some of the Octavia meetings and
> discussions so I can share the use cases of various OpenStack-Ansible
> operators.  Did you have some interest in helping with the implementation
> or are you eager to consume it once it's available?
>
> --
> Major Hayden
>


Re: [openstack-dev] [magnum][higgins][all] The first Higgins team meeting

2016-05-12 Thread Davanum Srinivas
Ihor,

Please see the notes from the first meeting:
http://eavesdrop.openstack.org/meetings/higgins/2016/higgins.2016-05-13-03.01.log.html

On Thu, May 12, 2016 at 11:10 PM, Ihor Dvoretskyi
 wrote:
> Hongbin,
>
> Will the activities within the project be related to container orchestration
> systems (Kubernetes, Mesos, Swarm), or will they still live in the world of
> Magnum?
>
> Ihor
>
> On Wed, May 11, 2016 at 8:33 AM Hongbin Lu  wrote:
>>
>> Hi all,
>>
>>
>>
>> I am happy to announce that a new project (Higgins [1][2]) has been created
>> to provide a container service on OpenStack. The Higgins team will hold its
>> first team meeting this Friday at 0030 UTC [3]. At the first meeting, we
>> plan to collect requirements from interested individuals and drive consensus
>> on the project roadmap. Everyone is welcome to join. I hope to see you all
>> there.
>>
>>
>>
>> [1] https://review.openstack.org/#/c/313935/
>>
>> [2] https://wiki.openstack.org/wiki/Higgins
>>
>> [3] https://wiki.openstack.org/wiki/Higgins#Agenda_for_2016-05-13_0300_UTC
>>
>>
>>
>> Best regards,
>>
>> Hongbin
>>
>>
>>
>> From: Hongbin Lu
>> Sent: May-03-16 11:31 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: RE: [openstack-dev] [magnum][all] Build unified abstraction for
>> all COEs
>>
>>
>>
>> Hi all,
>>
>>
>>
>> According to the decision at the design summit [1], we are going to narrow
>> the scope of the Magnum project [2]. In particular, Magnum will focus on
>> COE deployment and management. The effort of building a unified container
>> abstraction will potentially go into a new project. My role here is to
>> collect interest for the new project, help to create a new team (if there
>> is enough interest), and then pass the responsibility to the new team. An
>> etherpad was created for this purpose:
>>
>>
>>
>> https://etherpad.openstack.org/p/container-management-service
>>
>>
>>
>> If you are interested in contributing to and/or leveraging the new container
>> service, I would ask you to state your name and requirements in the
>> etherpad. Your input will be appreciated.
>>
>>
>>
>> [1] https://etherpad.openstack.org/p/newton-magnum-unified-abstraction
>>
>> [2] https://review.openstack.org/#/c/311476/
>>
>>
>>
>> Best regards,
>>
>> Hongbin
>>
>>
>>
>> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
>> Sent: April-23-16 11:27 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
>> abstraction for all COEs
>>
>>
>>
>> Magnum is not a COE installer. It offers multi-tenancy from the ground up,
>> is well integrated with OpenStack services, and offers more COE features
>> pre-configured than you would get with an ordinary stock deployment. For
>> example, magnum offers integration with keystone that allows developer
>> self-service to get a native container service in a few minutes with the
>> same ease as getting a database server from Trove. It allows cloud operators
>> to set up the COE templates in a way that they can be used to fit the
>> policies of that particular cloud.
>>
>>
>>
>> Keeping a COE working with OpenStack requires expertise that the Magnum
>> team has codified across multiple options.
>>
>> --
>>
>> Adrian
>>
>>
>> On Apr 23, 2016, at 2:55 PM, Hongbin Lu  wrote:
>>
>> I don't necessarily agree with the viewpoint below, but it was the
>> majority viewpoint when I was trying to sell Magnum. There are
>> people interested in adopting Magnum, but they ran away after they
>> figured out that what Magnum actually offers is a COE deployment service. My
>> takeaway is that COE deployment is not the real pain, and there are several
>> alternatives available (Heat, Ansible, Chef, Puppet, Juju, etc.). Limiting
>> Magnum to be a COE deployment service might prolong the existing adoption
>> problem.
>>
>>
>>
>> Best regards,
>>
>> Hongbin
>>
>>
>>
>> From: Georgy Okrokvertskhov [mailto:gokrokvertsk...@mirantis.com]
>> Sent: April-20-16 6:51 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
>> abstraction for all COEs
>>
>>
>>
>> If Magnum is focused on installation and management of COEs, it will
>> be unclear how much it differs from Heat and other generic
>> orchestration.  It looks like most of the current Magnum functionality is
>> provided by Heat. A Magnum focus on deployment will potentially lead to
>> another Heat-like API.
>>
>> Unless Magnum is really focused on containers, its value will be minimal
>> for OpenStack users who already use Heat/Orchestration.
>>
>>
>>
>>
>>
>> On Wed, Apr 20, 2016 at 3:12 PM, Keith Bray 
>> wrote:
>>
>> Magnum doesn't have to preclude tight integration for the single COEs you
>> speak of.  The heavy lifting of tight integration of the COE into
>> OpenStack (so that it performs 

[openstack-dev] [keystone] Need info on the correct location to place the certificates / correct tags to specify the same

2016-05-12 Thread Rahul Sharma
Hi All,

While upgrading from Kilo to Liberty, I am seeing these warnings in the
logs:

./keystone/keystone.log:2016-05-11 13:40:34.013 20402 WARNING
oslo_config.cfg [-] Option "certfile" from group "ssl" is deprecated. Use
option "certfile" from group "eventlet_server_ssl".
./keystone/keystone.log:2016-05-11 13:40:34.013 20402 WARNING
oslo_config.cfg [-] Option "certfile" from group "eventlet_server_ssl" is
deprecated for removal.  Its value may be silently ignored in the future.
./keystone/keystone.log:2016-05-11 13:40:34.013 20402 WARNING
oslo_config.cfg [-] Option "keyfile" from group "ssl" is deprecated. Use
option "keyfile" from group "eventlet_server_ssl".
./keystone/keystone.log:2016-05-11 13:40:34.013 20402 WARNING
oslo_config.cfg [-] Option "keyfile" from group "eventlet_server_ssl" is
deprecated for removal.  Its value may be silently ignored in the future.
./keystone/keystone.log:2016-05-11 13:40:34.013 20402 WARNING
oslo_config.cfg [-] Option "ca_certs" from group "ssl" is deprecated. Use
option "ca_certs" from group "eventlet_server_ssl".
./keystone/keystone.log:2016-05-11 13:40:34.013 20402 WARNING
oslo_config.cfg [-] Option "ca_certs" from group "eventlet_server_ssl" is
deprecated for removal.  Its value may be silently ignored in the future.

It looks like the parameters certfile, keyfile, and ca_certs are going to be
deprecated (they may be deprecated by now) in future releases. To run
keystone with TLS, I need to specify the location of my certificates in
some configuration file. Do the logs above mean that the certs will be
stored in some standard/default directory? I tried to find documentation
describing these changes or any configuration updates needed to support
them, but couldn't find any. Can someone please help me identify where the
right configuration should go?
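
For what it's worth, my current reading of the warnings is that the options
simply moved to a new group. This is only a sketch of that renamed group
(the paths are placeholders), and it assumes keystone is still run under
eventlet rather than behind Apache:

    [eventlet_server_ssl]
    certfile = /etc/keystone/ssl/certs/keystone.pem
    keyfile = /etc/keystone/ssl/private/keystonekey.pem
    ca_certs = /etc/keystone/ssl/certs/ca.pem

But since the second set of warnings says even eventlet_server_ssl is
deprecated for removal, I suspect the longer-term answer is to terminate TLS
in a web server in front of keystone instead - confirmation would be
appreciated.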

Thanks.

*Rahul Sharma*
*MS in Computer Science, 2016*
College of Computer and Information Science, Northeastern University
Mobile:  801-706-7860
Email: rahulsharma...@gmail.com
Linkedin: www.linkedin.com/in/rahulsharmaait


Re: [openstack-dev] [magnum][higgins][all] The first Higgins team meeting

2016-05-12 Thread Ihor Dvoretskyi
Hongbin,

Will the activities within the project be related to container
orchestration systems (Kubernetes, Mesos, Swarm), or will they still live
in the world of Magnum?

Ihor

On Wed, May 11, 2016 at 8:33 AM Hongbin Lu  wrote:

> Hi all,
>
>
>
> I am happy to announce that a new project (Higgins [1][2]) has been created
> to provide a container service on OpenStack. The Higgins team will hold its
> first team meeting this Friday at 0030 UTC [3]. At the first meeting, we
> plan to collect requirements from interested individuals and drive
> consensus on the project roadmap. Everyone is welcome to join. I hope to
> see you all there.
>
>
>
> [1] https://review.openstack.org/#/c/313935/
>
> [2] https://wiki.openstack.org/wiki/Higgins
>
> [3] https://wiki.openstack.org/wiki/Higgins#Agenda_for_2016-05-13_0300_UTC
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Hongbin Lu
> *Sent:* May-03-16 11:31 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* RE: [openstack-dev] [magnum][all] Build unified abstraction
> for all COEs
>
>
>
> Hi all,
>
>
>
> According to the decision at the design summit [1], we are going to narrow
> the scope of the Magnum project [2]. In particular, Magnum will focus on
> COE deployment and management. The effort of building a unified container
> abstraction will potentially go into a new project. My role here is to
> collect interest for the new project, help to create a new team (if there
> is enough interest), and then pass the responsibility to the new team. An
> etherpad was created for this purpose:
>
>
>
> https://etherpad.openstack.org/p/container-management-service
>
>
>
> If you are interested in contributing to and/or leveraging the new container
> service, I would ask you to state your name and requirements in the
> etherpad. Your input will be appreciated.
>
>
>
> [1] https://etherpad.openstack.org/p/newton-magnum-unified-abstraction
>
> [2] https://review.openstack.org/#/c/311476/
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Adrian Otto [mailto:adrian.o...@rackspace.com
> ]
> *Sent:* April-23-16 11:27 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
>
>
>
> Magnum is not a COE installer. It offers multi-tenancy from the ground up,
> is well integrated with OpenStack services, and offers more COE features
> pre-configured than you would get with an ordinary stock deployment. For
> example, magnum offers integration with keystone that allows developer
> self-service to get a native container service in a few minutes with the
> same ease as getting a database server from Trove. It allows cloud
> operators to set up the COE templates in a way that they can be used to fit
> the policies of that particular cloud.
>
>
>
> Keeping a COE working with OpenStack requires expertise that the Magnum
> team has codified across multiple options.
>
> --
>
> Adrian
>
>
> On Apr 23, 2016, at 2:55 PM, Hongbin Lu  wrote:
>
> I don't necessarily agree with the viewpoint below, but it was the
> majority viewpoint when I was trying to sell Magnum. There are
> people interested in adopting Magnum, but they ran away after they
> figured out that what Magnum actually offers is a COE deployment service. My
> takeaway is that COE deployment is not the real pain, and there are several
> alternatives available (Heat, Ansible, Chef, Puppet, Juju, etc.). Limiting
> Magnum to be a COE deployment service might prolong the existing adoption
> problem.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Georgy Okrokvertskhov [mailto:gokrokvertsk...@mirantis.com
> ]
> *Sent:* April-20-16 6:51 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
>
>
>
> If Magnum is focused on installation and management of COEs, it will
> be unclear how much it differs from Heat and other generic
> orchestration.  It looks like most of the current Magnum functionality is
> provided by Heat. A Magnum focus on deployment will potentially lead to
> another Heat-like API.
>
> Unless Magnum is really focused on containers, its value will be minimal
> for OpenStack users who already use Heat/Orchestration.
>
>
>
>
>
> On Wed, Apr 20, 2016 at 3:12 PM, Keith Bray 
> wrote:
>
> Magnum doesn't have to preclude tight integration for the single COEs you
> speak of.  The heavy lifting of tight integration of the COE into
> OpenStack (so that it performs optimally with the infra) can be modular
> (where the work is performed by plug-in models to Magnum, not performed by
> Magnum itself). The tight integration can be done by leveraging existing
> technologies (Heat and/or your DevOps tool of choice:
> Chef/Ansible/etc). This allows 

Re: [openstack-dev] [tacker]ad-hoc meeting of May 16

2016-05-12 Thread Qiming Teng
UTC 16:00 is too late for most of the senlin developers to join.
I'm still interested in the outcomes of this discussion.

Regards,
  Qiming

On Thu, May 12, 2016 at 11:59:37PM +, Bruce Thompson (brucet) wrote:
> Hi,
> 
> IRC meeting: https://webchat.freenode.net/?channels=tacker on Monday, May 16, 
> starting at UTC 16:00.
> 
> Agenda:
> # Tacker Ceilometer Monitoring Driver and Autoscaling Sync
> 
> 
> 
> Best Regards
> Bruce Thompson ( brucet )



[openstack-dev] [barbican]barbican-worker and barbican-keystone-listener not starting up

2016-05-12 Thread Akshay Kumar Sanghai
Hi,
I have a 4-node OpenStack Liberty setup: 1 controller, 2 compute nodes,
and 1 network node. I installed barbican-api, barbican-worker, and
barbican-keystone-listener with apt-get; the deployment points to the
Ubuntu Liberty repository. All of the services use MariaDB as their
database.

After modifying barbican.conf, I ran barbican-db-manage upgrade and
restarted the services.

This is the status of the services:
root@controller1:~# service barbican-keystone-listener status
barbican-keystone-listener stop/waiting
root@controller1:~# service barbican-worker status
barbican-worker stop/waiting
root@controller1:~# service barbican-api status
barbican-api start/running, process 36754

This is a section of the barbican-api.log file:

ImportError: No module named barbican.api.app
OOPS ! failed loading app in worker 1 (pid 6677) :( trying again...
worker respawning too fast !!! i have to sleep a bit (2 seconds)...
Respawned uWSGI worker 1 (new pid: 6748)
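
A quick way to check whether the module is importable at all, independent of
uWSGI (assuming the package installs into the system site-packages), would be:

    $ python -c "import barbican.api.app"

If that also fails, the Ubuntu package presumably installed the code somewhere
that is not on uWSGI's Python path.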

can you please help?

Thanks
Akshay


[openstack-dev] [tricircle]PTL Candidacy for Tricircle in Newton release

2016-05-12 Thread joehuang
I am Chaoyi Huang, and I would like to propose my candidacy for Tricircle
PTL in Newton.

I have been working on Tricircle from day one - from the start of its open
source development in June 2015, and even before that, from the PoC of the
OpenStack cascading solution.

My passion for working on Tricircle comes from the fact that it tries to
address an area that OpenStack has not touched but that is very important:
if one tenant's VMs are deployed in multiple OpenStack instances, how is
their traffic isolated at the L2/L3 level, whether the VMs are in the same
network or in different networks, and how can this be automated?

Tricircle was developed as a lightweight API gateway framework in the Mitaka
release through the effort of the team: Nova APIGW, Cinder APIGW, and a
Neutron Tricircle plugin that reuses the Neutron API server and Neutron DB.
Nova APIGW is the trigger for networking automation, because it knows exactly
when a new VM is being provisioned; the Neutron Tricircle plug-in is then
responsible for setting up cross-OpenStack L2/L3 networking immediately for
the newly provisioned VM. Cinder APIGW and Nova APIGW make sure the volumes
for the same VM are co-located in the same OpenStack instance. In Mitaka,
only one OpenStack instance is allowed per Availability Zone, a network is
limited to being present in only one bottom OpenStack instance, and
cross-OpenStack L3 networking is established through a shared VLAN.
Obviously, there is still a lot to do to achieve the goal of Tricircle: to
provide an OpenStack API gateway and networking automation that allow
multiple OpenStack instances, spanning one site, multiple sites, or a hybrid
cloud, to be managed as a single OpenStack cloud.

So the objectives I want to achieve, with the help of the Tricircle team, in
the Newton release are:

* Cross-OpenStack L2 networking: this feature is fundamental to reaching the
  goal of Tricircle, so it will be the first priority in the Newton
  release.
* Dynamic pod binding: once a cloud is put into production, capacity
  expansion is inevitable, and adding already tested and verified OpenStack
  instances to the cloud is the easiest way to expand capacity. This feature
  also relies on cross-OpenStack L2 networking, since a tenant's VMs will be
  added to the same network after a new OpenStack instance is added for
  capacity expansion. This feature will be the second priority.
* Tempest and basic VM/volume operations: to make Tricircle mature enough to
  work, the basic VM/volume operation features that Tricircle currently
  lacks should be added, using Tempest to guarantee the quality of Tricircle
  and to comply with DefCore tests. This is also quite important in the
  Newton release.

And one more important one is:

* Keep Tricircle following the OpenStack open source development guidelines,
  to make the Tricircle project as open as any other OpenStack project, so
  that more and more talented people will join and contribute to Tricircle.
  The target is to become an OpenStack big tent project and a member of the
  OpenStack ecosystem, and to help the ecosystem address the multi-OpenStack
  cloud problem domain, whether in one site or multiple sites.
  
Thanks for reading the mail.

Best Regards
Chaoyi Huang ( Joe Huang )


-Original Message-
From: joehuang 
Sent: Thursday, May 05, 2016 9:44 AM
To: 'ski...@redhat.com'; OpenStack Development Mailing List (not for usage 
questions)
Subject: [openstack-dev][tricircle]PTL election of Tricircle for Newton release

Hello,

As discussed in yesterday's weekly meeting, the PTL nomination period runs 
May 9 ~ May 13, with an election May 16 ~ May 20 if there is more than one 
nomination. If you want to be the PTL for the Newton release of Tricircle, 
please send your self-nomination letter to the mailing list. You can refer to 
the nomination letters of other projects, for example Kuryr[1], Glance[2], and 
Neutron[3]; others can be found in [4].


[1]http://git.openstack.org/cgit/openstack/election/plain//candidates/newton/Kuryr/Gal_Sagie.txt
[2]http://git.openstack.org/cgit/openstack/election/plain//candidates/newton/Glance/Nikhil_Komawar.txt
[3]http://git.openstack.org/cgit/openstack/election/plain//candidates/newton/Neutron/Armando_Migliaccio.txt
[4]https://wiki.openstack.org/wiki/PTL_Elections_March_2016

Best Regards
Chaoyi Huang ( Joe Huang )


-Original Message-
From: Shinobu Kinjo [mailto:shinobu...@gmail.com]
Sent: Wednesday, May 04, 2016 5:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tricircle] Requirements for becoming approved 
official project

Hi Team,

There is additional work to become an official (approved) project.
Once we complete the PTL election with everyone's consensus, we need to update 
projects.yaml. [1] I think the shortest path (OSPF ;)) to becoming an approved 
project is to elect a PTL, then talk to the PTLs of other projects.

[1] https://github.com/openstack/governance/blob/master/reference/projects.yaml

Cheers,
Shinobu


On Mon, May 2, 2016 at 10:40 PM, joehuang 

Re: [openstack-dev] [openstack-operators] [glance] Austin summit summary: Rolling upgrades

2016-05-12 Thread Nikhil Komawar
Can we start some background research on this topic? It ties in very
well with deprecating the registry, and it would be nice if we could
determine whether the registry is useful or not, especially if we use
different rolling-upgrade schemes (direct oslo.vo, expand/contract, etc.).



On 5/5/16 2:06 AM, Nikhil Komawar wrote:
> Hello everyone,
>
> Just wanted to send a brief summary of the discussions at the summit.
> This list is not holistic however, it covers the relevant aspects that
> various stakeholders need to be aware of.
>
>   * The intent is that we want operators to be able to upgrade from one
> Glance release to the next with minimal (ideally, zero) downtime.
>
>   * Nova's been working on this, so there's a good example of how to
> accomplish this. Also, the product working group has a cross-project
> spec on this topic.
>
>   * Glance DB is the one component that would require some sophisticated
> logic to avoid the downtime. Other services are being handled by
> upgrade & swap mechanism.
>
>   * Some discussion was around a relative comparison with what other
> services are doing: in terms of performance, Glance has a simple schema,
> though it can hold massive amounts of data.
>
>   * The different approaches today include:
>
>       * oslo versioned objects
>
>       * neutron: expansion/contraction scheme; expand, lazy updates, force
>         contract after some time
>
>       * ironic: not using versioned objects, pinning the version
>
>       * cinder: split across multiple releases - add a table, code can
>         handle both
>
>   * Besides these, there was some discussion around best practices for
> upgrades: the preferred upgrade order is first DB, then registry,
> and then API.
>
>   * There were some research items: documenting the draining protocol
> for Glance nodes handling uploads, and writing up alternatives in the spec
> based on findings about what other projects are doing; sign-ups
> include mclaren for cinder, rosmaita for neutron, nikhil for nova.
> More volunteers are welcome.
>
> For more information please reach out to me on #openstack-glance, email,
> reply here etc.
>
>
>


-- 

Thanks,
Nikhil





Re: [openstack-dev] [Openstack-operators] [glance] glance-registry deprecation: Request for feedback

2016-05-12 Thread Nikhil Komawar
I have been of the same opinion as far as upgrades go.

I think we are getting ahead of ourselves here a bit. We need to figure
out the rolling-upgrade story first and see whether the registry is
actually useful there as well.

The feedback from operator sessions also indicated that some ops do use
it that way (
http://lists.openstack.org/pipermail/openstack-dev/2016-May/094034.html ).

Overall, I do think the registry is a bit of overhead and it would be nice
to deprecate it, but we do need facts/technical research first.

On 5/12/16 9:20 PM, Sam Morrison wrote:
> We find the glance registry quite useful. Having a central glance-registry 
> API is useful when you have multiple datacenters, all with glance-apis 
> talking back to a central registry service. I guess they could all talk back 
> to the central DB server, but currently that would be over the public 
> Internet for us. Not really an issue; we can work around it.
>
> The major thing that the registry has given us is rolling upgrades. We 
> have been able to upgrade our registry first and then upgrade our API 
> servers one by one (we have about 15 glance-apis).
>
> I don’t think we would’ve been able to do that if all the glance-apis were 
> talking to the DB (at least not in glance’s current state).
>
> Sam
>
>
>
>
>> On 12 May 2016, at 1:51 PM, Flavio Percoco  wrote:
>>
>> Greetings,
>>
>> The Glance team is evaluating the needs and usefulness of the Glance Registry
>> service and this email is a request for feedback from the overall community
>> before the team moves forward with anything.
>>
>> Historically, there have been reasons to create this service. Some 
>> deployments
>> use it to hide database credentials from Glance public endpoints, others use 
>> it
>> for scaling purposes and others because v1 depends on it. This is a good time
>> for the team to re-evaluate the need of these services since v2 doesn't 
>> depend
>> on it.
>>
>> So, here's the big question:
>>
>> Why do you think this service should be kept around?
>>
>> Summit etherpad: 
>> https://etherpad.openstack.org/p/newton-glance-registry-deprecation
>>
>> Flavio
>> -- 
>> @flaper87
>> Flavio Percoco

-- 

Thanks,
Nikhil



Re: [openstack-dev] [Openstack-operators] [glance] glance-registry deprecation: Request for feedback

2016-05-12 Thread Sam Morrison
We find the glance registry quite useful. Having a central glance-registry API 
is useful when you have multiple datacenters, all with glance-apis talking back 
to a central registry service. I guess they could all talk back to the central 
DB server, but currently that would be over the public Internet for us. Not 
really an issue; we can work around it.

The major thing that the registry has given us is rolling upgrades. We have 
been able to upgrade our registry first and then upgrade our API servers one 
by one (we have about 15 glance-apis).

I don’t think we would’ve been able to do that if all the glance-apis were 
talking to the DB (at least not in glance’s current state).

Sam




> On 12 May 2016, at 1:51 PM, Flavio Percoco  wrote:
> 
> Greetings,
> 
> The Glance team is evaluating the needs and usefulness of the Glance Registry
> service and this email is a request for feedback from the overall community
> before the team moves forward with anything.
> 
> Historically, there have been reasons to create this service. Some deployments
> use it to hide database credentials from Glance public endpoints, others use 
> it
> for scaling purposes and others because v1 depends on it. This is a good time
> for the team to re-evaluate the need of these services since v2 doesn't depend
> on it.
> 
> So, here's the big question:
> 
> Why do you think this service should be kept around?
> 
> Summit etherpad: 
> https://etherpad.openstack.org/p/newton-glance-registry-deprecation
> 
> Flavio
> -- 
> @flaper87
> Flavio Percoco


Re: [openstack-dev] [magnum] Jinja2 for Heat template

2016-05-12 Thread Yuanying OTSUKA
Hi,

My concern is whether using option 1 is acceptable or not (because it’s not
implemented yet).
So, as a first step, I’ll implement bp:bay-with-no-floating-ips using
option 1, and option 2 or 3 can be implemented later if supporting older
versions is needed. Right?

(Sorry, I remember Tom was working on supporting jinja2, but I didn’t know
the current status of it.)


Thanks
- OTSUKA, Yuanying


On May 13, 2016 (Fri) at 3:34, Cammann, Tom  wrote:

> I’m in broad agreement with Hongbin. Having tried a patch to use jinja2 in
> the templates, it certainly adds complexity. I am in favor of using
> conditionals and consuming the latest version of heat. If we intend to
> support older versions of OpenStack this should be a clearly defined goal
> and needs to be tested. An aspiration to work with older versions isn’t a
> good policy.
>
>
>
> I would like to understand a bit better the “chaos” option 3 would cause.
>
>
>
> Tom
>
>
>
> *From:* Hongbin Lu [mailto:hongbin...@huawei.com]
> *Sent:* 12 May 2016 16:35
>
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] Jinja2 for Heat template
>
>
>
> We discussed the management of Heat templates several times. It seems the
> consensus is to leverage the *conditionals* feature from Heat (option #1).
> From past discussion, it sounds like option #2 or #3 would significantly
> complicate our Heat templates, incurring a maintenance burden.
>
>
>
> However, I agree with Yuanying that option #1 will make the Newton (or
> newer) version of Magnum incompatible with the Mitaka (or older) version of
> OpenStack. A solution I can think of is to have a Jinja2 version of the
> Heat templates in the contrib folder, so that operators can swap the Heat
> templates if they want to run a newer version of Magnum with an older
> version of OpenStack. Thoughts?
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Yuanying OTSUKA [mailto:yuany...@oeilvert.org
> ]
> *Sent:* May-12-16 6:02 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] Jinja2 for Heat template
>
>
>
> Hi,
>
> Thanks for your helpful comment.
>
>
>
> I didn’t know about the pattern you suggested.
>
> We often want "if" or "for" constructs, etc.
>
>
>
> For example,
>
> * if a private network is supplied as a parameter, disable creating the
> network resource.
>
> * if the https parameter is enabled, TCP port 6443 should be opened instead
> of 8080 in "OS::Neutron::SecurityGroup".
>
> * if the https parameter is enabled, the load balancing protocol should be
> TCP instead of HTTP
>
>
>
> and so on.
>
> So, I want to use a Jinja2 template to manage this.
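>
> For illustration, a rough sketch of how a Jinja2 conditional could handle
> the security group case (tls_enabled is a hypothetical template variable;
> the template would be rendered before being handed to Heat):
>
>   secgroup_kube:
>     type: OS::Neutron::SecurityGroup
>     properties:
>       rules:
>         - protocol: tcp
>           {% if tls_enabled %}
>           port_range_min: 6443
>           port_range_max: 6443
>           {% else %}
>           port_range_min: 8080
>           port_range_max: 8080
>           {% endif %}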
>
>
>
> I’ll try to use the composition model above,
>
> and also test the limited use of jinja2 templating.
>
>
>
>
>
> Thanks
>
> - OTSUKA, Yuanying
>
>
>
>
>
>
>
On May 12, 2016 (Thu) at 17:46, Steven Hardy  wrote:
>
> On Thu, May 12, 2016 at 11:08:02AM +0300, Pavlo Shchelokovskyy wrote:
> >Hi,
> >
> >not sure why 3 will bring chaos when implemented properly.
>
> I agree - heat is designed with composition in mind, and e.g in TripleO
> we're making heavy use of it for optional configurations and it works
> pretty well:
>
> http://docs.openstack.org/developer/heat/template_guide/composition.html
>
> https://www.youtube.com/watch?v=fw0JhywwA1E
>
>
> http://hardysteven.blogspot.co.uk/2015/05/tripleo-heat-templates-part-1-roles-and.html
>
>
> https://github.com/openstack/tripleo-heat-templates/tree/master/environments
>
> >Can you abstract the "thing" (sorry, not quite familiar with Magnum) that
> >needs FP + FP itself into a custom resource/nested stack? Then you could
> >use a single master template plus two environments (one with FP, one
> >without), and choose which one to use right where you have this logic
> >split in your code.
>
> Yes, this is exactly the model we make heavy use of in TripleO, it works
> pretty well.
>
> Note there's now an OS::Heat::None resource in heat, which makes it easy to
> conditionally disable things (without the need for a noop.yaml template
> that contains matching parameters):
>
>
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::None
>
> So you'd have two environment files like:
>
> cat enable_floating.yaml:
> resource_registry:
>   OS::Magnum::FloatingIP: templates/the_floating_config.yaml
>
> cat disable_floating.yaml:
> resource_registry:
>   OS::Magnum::FloatingIP: OS::Heat::None
>
> Again, this pattern is well proven and works pretty well.
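>
> To make the usage concrete: the template just declares the resource type,
> and the caller picks the matching environment at create time, e.g.
> (hypothetical stack and template names):
>
>   heat stack-create my_bay -f cluster.yaml -e enable_floating.yaml
>
> with cluster.yaml containing something like:
>
>   floating_ip:
>     type: OS::Magnum::FloatingIP
>
> Swapping in disable_floating.yaml turns that resource into a no-op without
> touching the template itself.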
>
> Conditionals may provide an alternative way to do this, but at the expense
> of some additional complexity inside the templates.
>
> >Option 2 is not so bad either IMO (AFAIK Trove was doing that at some
> >point, not sure of the current status), but the above would be nicer.
>
> Yes, in the past[1] I've commented that the composition model above may be
> 

Re: [openstack-dev] [cross-project][infra][keystone] Moving towards a Identity v3-only on Devstack - Next Steps

2016-05-12 Thread Steve Martinelli
On Thu, May 12, 2016 at 6:39 PM, gordon chung  wrote:

>
> can anyone confirm when we deprecated keystonev2? i see a bp[1] related
> to deprecation that was 'implemented' in 2013.
>
>
We did this last release, see the sixth bullet point here:

http://docs.openstack.org/releasenotes/keystone/mitaka.html#deprecation-notes

I also vocalized this on the regular, -dev, and -ops mailing lists.


> i realise switching to v3 breaks many gates but it'd be good to at some
> point say it's not 'keystonev3 breaking the gate' but rather 'projectx
> is breaking the gate because they are using keystonev2 which was
> deprecated 4 cycles ago'. given the deprecation period allowed already,
> can we say "here's some help, fix/merge this by
> , or your gate will be broken until then"?
> (assuming all the above items by Raildo doesn't fix everything).
>
> [1] https://blueprints.launchpad.net/keystone/+spec/deprecate-v2-api
>
> cheers,
>
> --
> gord
>


Re: [openstack-dev] [glance] [VMT] [Security] Proposal to add Brian Rosmaita to the glance-coresec team

2016-05-12 Thread Nikhil Komawar


On 5/12/16 8:44 PM, Jeremy Stanley wrote:
> On 2016-05-12 17:38:22 -0400 (-0400), Nikhil Komawar wrote:
>> On 5/12/16 8:35 AM, Jeremy Stanley wrote:
> [...]
>>> While the size I picked in item #2 at
>>> https://governance.openstack.org/reference/tags/vulnerability_managed.html#requirements
>>> is not meant to be a strict limit, you may still want to take this
>>> as an opportunity to rotate out some of your less-active reviewers
>>> (if there are any).
>> Thanks for not being strict on it.
> It's also possible this is an indication that we put the recommended
> cap too low, and should revisit it. I'll bring it up with other VMT
> members. I sort of picked that number out of the air... it seemed
> reasonable based on a survey of the sizes of some other supported
> projects' -coresec teams, but that's certainly worth revisiting.

+1 on re-iterating on the number

>> I do however, want to make another proposal:
>>
>> Since Stuart is our VMT liaison and he's on hiatus, can we add Brian as
>> his substitute? As soon as Stuart is back and ready to shoulder this
>> responsibility, we should do the rotation.
> [...]
>
> This seems fine. It does make sense to not expose embargoed
> vulnerabilities to (even temporarily) inactive team members, as a
> matter of hygiene.

-- 

Thanks,
Nikhil



Re: [openstack-dev] [glance] [VMT] [Security] Proposal to add Brian Rosmaita to the glance-coresec team

2016-05-12 Thread Jeremy Stanley
On 2016-05-12 17:38:22 -0400 (-0400), Nikhil Komawar wrote:
> On 5/12/16 8:35 AM, Jeremy Stanley wrote:
[...]
> > While the size I picked in item #2 at
> > https://governance.openstack.org/reference/tags/vulnerability_managed.html#requirements
> > is not meant to be a strict limit, you may still want to take this
> > as an opportunity to rotate out some of your less-active reviewers
> > (if there are any).
> 
> Thanks for not being strict on it.

It's also possible this is an indication that we put the recommended
cap too low, and should revisit it. I'll bring it up with other VMT
members. I sort of picked that number out of the air... it seemed
reasonable based on a survey of the sizes of some other supported
projects' -coresec teams, but that's certainly worth revisiting.

> I do however, want to make another proposal:
> 
> Since Stuart is our VMT liaison and he's on hiatus, can we add Brian as
> his substitute? As soon as Stuart is back and ready to shoulder this
> responsibility, we should do the rotation.
[...]

This seems fine. It does make sense to not expose embargoed
vulnerabilities to (even temporarily) inactive team members, as a
matter of hygiene.
-- 
Jeremy Stanley



Re: [openstack-dev] [Openstack-operators] [glance] glance-registry deprecation: Request for feedback

2016-05-12 Thread Nikhil Komawar
The copy-from case is only in v1.


For v2, I am having discussions with Doug, Morgan, and Mike (TC members),
as well as Chris (interop engineer with the foundation), to ensure that we
can actually support copy-from in v2 for end users. I will add you to
the review and you can chime in.


Thanks.


On 5/12/16 7:59 PM, Fox, Kevin M wrote:
> Is there a copy-from-url method that's not deprecated yet?
>
> The app catalog is still pointing users at the command line in v1 mode.
>
> Thanks,
> Kevin
> 
> *From:* Matt Fischer [m...@mattfischer.com]
> *Sent:* Thursday, May 12, 2016 4:43 PM
> *To:* Flavio Percoco
> *Cc:* openstack-dev@lists.openstack.org;
> openstack-operat...@lists.openstack.org
> *Subject:* Re: [Openstack-operators] [glance] glance-registry
> deprecation: Request for feedback
>
>
> On May 11, 2016 10:03 PM, "Flavio Percoco"  > wrote:
> >
> > Greetings,
> >
> > The Glance team is evaluating the needs and usefulness of the Glance
> Registry
> > service and this email is a request for feedback from the overall
> community
> > before the team moves forward with anything.
> >
> > Historically, there have been reasons to create this service. Some
> deployments
> > use it to hide database credentials from Glance public endpoints,
> others use it
> > for scaling purposes and others because v1 depends on it. This is a
> good time
> > for the team to re-evaluate the need of these services since v2
> doesn't depend
> > on it.
> >
> > So, here's the big question:
> >
> > Why do you think this service should be kept around?
>
> I've not seen any responses so far so wanted to just say we have no
> use case for it. I assume this also explains the silence from the rest
> of the ops. +1 to remove.
>
>
>

-- 

Thanks,
Nikhil




[openstack-dev] [tacker]ad-hoc meeting of May 16

2016-05-12 Thread Bruce Thompson (brucet)
Hi,

IRC meeting: https://webchat.freenode.net/?channels=tacker on Monday, May 16, 
starting at UTC 16:00.

Agenda:
# Tacker Ceilometer Monitoring Driver and Autoscaling Sync



Best Regards
Bruce Thompson ( brucet )


Re: [openstack-dev] [Openstack-operators] [glance] glance-registry deprecation: Request for feedback

2016-05-12 Thread Fox, Kevin M
Is there a copy-from-url method that's not deprecated yet?

The app catalog is still pointing users at the command line in v1 mode.

Thanks,
Kevin

From: Matt Fischer [m...@mattfischer.com]
Sent: Thursday, May 12, 2016 4:43 PM
To: Flavio Percoco
Cc: openstack-dev@lists.openstack.org; openstack-operat...@lists.openstack.org
Subject: Re: [Openstack-operators] [glance] glance-registry deprecation: 
Request for feedback


On May 11, 2016 10:03 PM, "Flavio Percoco" 
> wrote:
>
> Greetings,
>
> The Glance team is evaluating the needs and usefulness of the Glance Registry
> service and this email is a request for feedback from the overall community
> before the team moves forward with anything.
>
> Historically, there have been reasons to create this service. Some deployments
> use it to hide database credentials from Glance public endpoints, others use 
> it
> for scaling purposes and others because v1 depends on it. This is a good time
> for the team to re-evaluate the need of these services since v2 doesn't depend
> on it.
>
> So, here's the big question:
>
> Why do you think this service should be kept around?

I've not seen any responses so far so wanted to just say we have no use case 
for it. I assume this also explains the silence from the rest of the ops. +1 to 
remove.


Re: [openstack-dev] [Openstack-operators] [glance] glance-registry deprecation: Request for feedback

2016-05-12 Thread Matt Fischer
On May 11, 2016 10:03 PM, "Flavio Percoco"  wrote:
>
> Greetings,
>
> The Glance team is evaluating the needs and usefulness of the Glance
Registry
> service and this email is a request for feedback from the overall
community
> before the team moves forward with anything.
>
> Historically, there have been reasons to create this service. Some
deployments
> use it to hide database credentials from Glance public endpoints, others
use it
> for scaling purposes and others because v1 depends on it. This is a good
time
> for the team to re-evaluate the need of these services since v2 doesn't
depend
> on it.
>
> So, here's the big question:
>
> Why do you think this service should be kept around?

I've not seen any responses so far so wanted to just say we have no use
case for it. I assume this also explains the silence from the rest of the
ops. +1 to remove.


Re: [openstack-dev] [kuryr] Port binding query

2016-05-12 Thread Mohammad Banikazemi

Yes, we want to use the same naming conventions when possible.
Submitted a fix here: https://review.openstack.org/#/c/315799/

Best,

Mohammad




From:   Antoni Segura Puimedon 
To: "OpenStack Development Mailing List (not for usage questions)"

Cc: Mohammad Banikazemi/Watson/IBM@IBMUS
Date:   05/12/2016 11:51 AM
Subject:    Re: [openstack-dev] [kuryr] Port binding query





On Thu, May 12, 2016 at 4:50 PM, Neil Jerram  wrote:
  I'm trying Kuryr with networking-calico and think I've hit an unhelpful
  inconsistency. A Neutron port has 'id' and 'device_id' fields that are
  usually different. When Nova does VIF binding for a Neutron port, it
  generates the Linux device name from 'tap' + port['id']. But when Kuryr
  does VIF binding for a Neutron port, I think it generates the Linux
  device name from 'tap' + port['device_id'].

  Thoughts? Does that sound right, or have I misread the code and my logs?
  If it's correct, it marginally impacts the ability to use identical agent
  and Neutron driver/plugin code for the two cases (Nova and Kuryr).

I think we are supposed to behave like Nova, binding-wise.

@Banix: Can you confirm that it is a bug and not a feature?

From a quick grepping I see that nova sets the name to be:

    nova/network/neutronv2/api.py:    devname = "tap" + current_neutron_port['id']

Whereas in Kuryr we use the first 8 characters of the Docker endpoint id.
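
For illustration, the divergence is roughly this (a simplified sketch, not
the actual code paths; the function names are mine):

    # Sketch of the two naming schemes for the tap device.

    def nova_style_tap_name(neutron_port):
        # Nova derives the device name from the Neutron port UUID.
        return "tap" + neutron_port['id']

    def kuryr_style_tap_name(docker_endpoint_id):
        # Kuryr today derives it from the Docker endpoint id instead, so
        # tooling that keys off the Neutron port id won't find the device.
        return "tap" + docker_endpoint_id[:8]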


  Thanks,
      Neil






[openstack-dev] What's Up, Doc? 13 May

2016-05-12 Thread Lana Brindley
= 13 May 2016 =

Hi everyone,

Wow, what a busy week! I've been mainly focused on the Install Guide speciality 
team this week: gathering interested participants, ensuring our specs are ready 
to be merged, and setting a new meeting time. I'm also pleased to say that our 
speciality team reports make a post-Summit comeback in this newsletter. 

Just another reminder that all projects should update their cross-project 
liaison for docs on the wiki here: 
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Documentation If you're 
one of the lucky people nominated by your PTL to be a docs CPL, then please do 
your very best to attend docs meetings in your favourite timezone to make sure 
we hear the voice of your project when we're making documentation decisions. 
Details of upcoming docs meetings are at the end of this newsletter every week. 

On a related note, it was refreshing to read the conversation from the US 
meeting this week, and the associated conversation on the mailing lists about 
developer contributions to documentation. We'd love to find out what it is that 
prevents you from contributing to the docs, and what the docs team can do to 
make things that little bit easier for you! Reach out to us either on the dev 
mailing list (with [docs] in the subject line), or on the docs mailing list at 
openstack-d...@lists.openstack.org. 

== Progress towards Newton ==

145 days to go!

Bugs closed so far: 71

Newton deliverables 
https://wiki.openstack.org/wiki/Documentation/NewtonDeliverables
Feel free to add more detail and cross things off as they are achieved 
throughout the release. I will also do my best to ensure it's kept up to date 
for each newsletter.

The Ops and HA Guides now exist in openstack-manuals, and the old repos have 
now been set to read-only.

We also have the first patch in for the Install Guide 'cookie cutter' template, 
which is a great start!

== Speciality Team Reports ==

'''HA Guide: Bogdan Dobrelya'''
No report this week.

'''Install Guide: Lana Brindley'''
New meeting time proposed: https://review.openstack.org/#/c/314831/ Still need 
to merge final spec: https://review.openstack.org/#/c/310588/ If you're 
interested in helping out, add your name here: 
https://wiki.openstack.org/wiki/Documentation/InstallGuide#Team_members

'''Networking Guide: Edgar Magana'''
We are planning to resume the meeting next week.

'''Security Guide: Nathaniel Dillon'''
Summit Recap: https://etherpad.openstack.org/p/austin-docs-workgroup-security
Good conversations around API rate limiting, OSSN in-flight, SecGuide work 
being prep'd (Thanks to Luke Hinds for taking this on!)
Added Doc reviewer (Thanks Shilla and welcome!)
Will be focusing on Neutron security

'''User Guides: Joseph Robinson'''
No report this week.

'''Ops Guide: Shilla Saebi'''
Proposed architecture guide restructure: 
https://review.openstack.org/#/c/311998/ 
Ops guide session Etherpad from Austin: 
https://etherpad.openstack.org/p/AUS-ops-Docs-ops-guide
OpsGuide reorg: https://etherpad.openstack.org/p/PAO-ops-ops-guide-fixing
Newton Plans:
Review content of both guides, and delete anything out of date
Review architecture of both guides, and possibly combine
Ops Guide in openstack-manuals repo
Gather content from Ops internal documentation

'''API Guide: Anne Gentle'''
All but two services have someone working on landing a migration patch in the 
project's repo.
Read: Status on bugs and migration 
http://lists.openstack.org/pipermail/openstack-docs/2016-May/008624.html
Read: Summit session recap 
http://lists.openstack.org/pipermail/openstack-dev/2016-May/094472.html

'''Config/CLI Ref: Tomoyuki Kato'''
Discussing documenting auto generation of config options with Oslo team.
Dropped keystone command-line client from CLI reference.
Keystone CLI was removed in python-keystoneclient 3.0.0 release.

'''Training labs: Pranav Salunke, Roger Luethi'''
Working on adding new features like PXE boot.
Stabilizing current release and backends.
Figuring out the zip file generation and web site/page.
Changing the meeting time to a more CET/CEST-friendly time.

'''Training Guides: Matjaz Pancur'''
No report this week.

'''Hypervisor Tuning Guide: Joe Topjian'''
No report this week.

'''UX/UI Guidelines: Michael Tullis, Stephen Ballard'''
No report this week.

== Site Stats ==

The interesting fact I'd like to share with you this week is that just over 25% 
of our viewers this month are new to the site.

== Doc team meeting ==

The US meeting was held this week; you can read the minutes here: 
https://wiki.openstack.org/wiki/Documentation/MeetingLogs#2016-05-11

Next meetings:
APAC: Wednesday 18 May, 00:30 UTC
US: Wednesday 25 May, 19:00 UTC

Please go ahead and add any agenda items to the meeting page here: 
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

--

Keep on doc'ing!

Lana

https://wiki.openstack.org/wiki/Documentation/WhatsUpDoc#13_May_2016

-- 
Lana Brindley
Technical Writer

Re: [openstack-dev] [cross-project][infra][keystone] Moving towards a Identity v3-only on Devstack - Next Steps

2016-05-12 Thread Adam Young

On 05/12/2016 06:39 PM, gordon chung wrote:


On 12/05/2016 1:47 PM, Morgan Fainberg wrote:


 On Thu, May 12, 2016 at 10:42 AM, Sean Dague wrote:

 We just had to revert another v3 "fix" because it wasn't verified to
 work correctly in the gate - https://review.openstack.org/#/c/315631/

 While I realize project-config patches are harder to test, you can do so
 with a bogus devstack-gate change that has the same impact in some cases
 (like the case above).

 I think the important bit on moving forward is that every patch here
 which might be disruptive has some manual verification about it working
 posted in review by v3 team members before we approve them.

 I also think we need to largely stay non voting on the v3 only job until
 we're quite confident that the vast majority of things are flipped over
 (for instance there remains an issue in nova <=> ironic communication
 with v3 last time I looked). That allows us to fix things faster because
 we don't wedge some slice of the projects in a gate failure.

  -Sean

 On 05/12/2016 11:08 AM, Raildo Mascena wrote:
  > Hi folks,
  >
  > Although the Identity v2 API is deprecated as of Mitaka [1], some
  > services haven't implemented proper support for v3 yet. For instance,
  > we implemented a patch that made DevStack v3 by default that, when
  > merged, broke a lot of project gates in a few hours [2]. This
  > happened due to specific services incompatibility issues with
 Keystone
  > v3 API, such as hardcoded v2 usage, usage of removed
 keystoneclient CLI,
  > requesting v2 service tokens and the lack of keystoneauth session
 usage.
  >
  > To discuss those points, we did a cross-project work
  > session in the Newton Summit[3]. One point we are working on at this
  > moment is creating gates to ensure the main OpenStack services
  > can live without the Keystone v2 API. Those gates setup devstack with
  > only Identity v3 enabled and run the Tempest suite on this
 environment.
  >
  > We already did that for a few services, like Nova, Cinder, Glance,
  > Neutron, Swift. We are doing the same job for other services such
  > as Ironic, Magnum, Ceilometer, Heat and Barbican [4].
  >
  > In addition, we are creating jobs to run functional tests for the
  > services on this identity v3-only environment[5]. Also, we have a
 couple
  > of other fronts that we are doing like removing some hardcoded v2
 usage
  > [6], implementing keystoneauth sessions support in clients and
 APIs [7].
  >
  > Our plan is to keep tackling as many items from the cross-project
  > session etherpad as we can, so we can achieve more confidence in
 moving
  > to a DevStack working v3-only, making sure everyone is prepared
 to work
  > with Keystone v3 API.
  >
  > Feedback and reviews are very much appreciated.
  >
  > [1] https://review.openstack.org/#/c/251530/
  > [2] https://etherpad.openstack.org/p/v3-only-devstack
  > [3] https://etherpad.openstack.org/p/newton-keystone-v3-devstack
  > [4]
 
https://review.openstack.org/#/q/project:openstack-infra/project-config+branch:master+topic:v3-only-integrated
  > [5]
 https://review.openstack.org/#/q/topic:v3-only-functionals-tests-gates
  > [6]
 https://review.openstack.org/#/q/topic:remove-hardcoded-keystone-v2
  > [7] https://review.openstack.org/#/q/topic:use-ksa
  >
  > Cheers,
  >
  > Raildo
  >
  >
  >


This  also comes back to the conversation at the summit. We need to
propose the timeline to turn over for V3 (regardless of
voting/non-voting today) so that it is possible to set the timeline that
is expected for everything to get fixed (and where we are
expecting/planning to stop reverting while focusing on fixing the
v3-only changes).

I am going to ask the Keystone team to set forth the timeline and commit
to getting the pieces in order so that we can make v3-only voting rather
than playing the propose/revert game we're currently doing. A proposed
timeline and gameplan will only help at this point.


can anyone confirm when we deprecated keystonev2? i see a bp[1] related
to deprecation that was 'implemented' in 2013.

i realise switching to v3 breaks many gates but it'd be good to at some
point say it's not 'keystonev3 breaking the gate' but rather 'projectx
is breaking the gate because they are using keystonev2 which was
deprecated 4 cycles ago'. given the deprecation period allowed already,
can we say "here's some help, fix/merge this by
, or your gate will be broken until then"?
(assuming all the above items by Raildo don't fix everything).


I'd like to say Ocata.


[1] https://blueprints.launchpad.net/keystone/+spec/deprecate-v2-api

cheers,
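
For anyone who wants to reproduce the v3-only environment locally before
these jobs go voting, a minimal devstack local.conf sketch may help
(ENABLE_IDENTITY_V2 is the switch the gate jobs flip; exact variable names
can differ across devstack versions, so treat this as an assumption to
verify):

[[local|localrc]]
# Disable the deprecated Identity v2 API entirely
ENABLE_IDENTITY_V2=False
# Have devstack-provisioned accounts and clients default to v3
IDENTITY_API_VERSION=3

The v3-only jobs described above do essentially this and then run Tempest.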





Re: [openstack-dev] [cross-project][quotas][delimiter] Austin Summit - Design Session Summary

2016-05-12 Thread Nikhil Komawar
I realized that I missed one of your questions earlier. Response for
that inline.


On 5/12/16 4:58 PM, Nikhil Komawar wrote:
>
>
> On 5/12/16 4:33 PM, Doug Hellmann wrote:
>> Excerpts from Nikhil Komawar's message of 2016-05-12 15:40:05 -0400:
>>> Please find my response inline:
>>>
>>> On 5/12/16 1:10 PM, Doug Hellmann wrote:
 Excerpts from Thierry Carrez's message of 2016-05-12 13:37:32 +0200:
> Tim Bell wrote:
>> [...]
>> I think it will be really difficult to persuade the mainstream projects 
>> to adopt
>> a library if it is not part of Oslo. Developing a common library for 
>> quota
>> management outside the scope of the common library framework for 
>> OpenStack
>> does not seem to be encouraging the widespread use of delimiter.
>> [...]
> I agree it's hard enough to get Oslo libraries used across the board, I 
> don't think the task would be any easier with an external library.
>
> One case that would justify being external is if the library was 
> generally useful rather than OpenStack-specific: then the Oslo branding 
> might hinder its out-of-OpenStack adoption. But it doesn't feel like 
> that is the case here ?
>
 In the past we've tried to encourage folks creating very specially
 focused libraries for areas where the existing Oslo team has no
 real experience, such as os-win, to set up their own team. The Oslo team
 doesn't have to own all libraries.
>>> Thanks for that pointer!
>>>
 On the other hand, in this case I think quota management fits in Oslo as
 well as messaging or policy do. We have a mechanism in place for managing
 sub-teams so signing up to work on quotas doesn't have to mean signing
 up to be oslo-core.
>>> Yes, I agree that this fits well into the cross-project consistency
>>> domain. And yes, thanks for proposing the sub-team strategy to move forward.
>>>
>>> However, this library currently doesn't exist. We are still identifying
>>> what we want to achieve as a part of this scope, there's a ton of
>>> discussions in progress and we are on the verge of finding concrete
>>> tasks for people to pick up (so no second commit yet). Even after having
>>> done something we do not know if that is something which will work for
>>> all the projects -- basically I am trying to say quotas is a big domain
>>> and now we are starting (very) small. We need a concrete implementation
>>> and its adoption in a couple of projects to even say that it is a
>>> successful cross project implementation.
>>>
>>> The last thing we want to worry about is more process, governance and an
>>> approach to over-standardize things when we do not even have anything in
>>> tree. I think it makes sense as a part of somewhere _all_ projects can
>>> adopt but not until it's ready to be adopted.
>> I'm not sure what processes you're talking about that might be a burden.
>> Can you elaborate?

Currently, I do not have facts but only hints (anticipations, expected
hiccups). If you think that's not the case, do you think we can be done
with all the processes, governance, etc. and yet be able to come up with
a POC release (dunno something like 0.0.3) within next 5 weeks that can
be experimented upon in one/two of the projects POC patches? We are
looking at that timeline and not sure how long the governance and specs
will take (do we need an oslo spec? how big is the process to set up
sub-cores? how do we involve more folks without them thinking of oslo
standards? etc.).

My biggest concern is that this will be seen as something that is an
attempt to standardize things where we do not even have a standard but
want to create one. We wish to be agile in our workflow and do not care
where that exists.

>>
 The notion that we're going to try to have all projects depend on
 something we create but that we *don't* create as part of an official
 project is extremely confusing. Whether we make it part of Oslo or part
 of its own thing, I think we want this to be official.
>>> Yes, that exists as a notion but it's agreed upon in the ML thread [1]
>>> that it's not practical yet. The hardest thing to achieve is to get the
>>> quotas code right and, for now, we would like to focus on that. We
>>> do want to worry about governance and adoption across domain
>>> (standardization) once we do have a standard.
>> If you go off in a corner and build something that doesn't fit any of
>> our community standards for code or API, how do you expect the adoption
>> process to work out?
>>
> But we are not working in a corner, we've a representative from cinder,
> someone from trove who is interested in long term quota goals, someone
> who has worked on nova (and then we are borrowing the gen_id concept of
> Jay from Nova), someone from Zaqar, a PWG representative who is also
> collaborating on UX study, someone who is interested from vmware's
> perspective. And then we are developing this on the #openstack-irc
> channel, ML, weekly meetings etc.

Re: [openstack-dev] [cross-project][infra][keystone] Moving towards a Identity v3-only on Devstack - Next Steps

2016-05-12 Thread gordon chung


On 12/05/2016 1:47 PM, Morgan Fainberg wrote:
>
>
> On Thu, May 12, 2016 at 10:42 AM, Sean Dague wrote:
>
> We just had to revert another v3 "fix" because it wasn't verified to
> work correctly in the gate - https://review.openstack.org/#/c/315631/
>
> While I realize project-config patches are harder to test, you can do so
> with a bogus devstack-gate change that has the same impact in some cases
> (like the case above).
>
> I think the important bit on moving forward is that every patch here
> which might be disruptive has some manual verification about it working
> posted in review by v3 team members before we approve them.
>
> I also think we need to largely stay non voting on the v3 only job until
> we're quite confident that the vast majority of things are flipped over
> (for instance there remains an issue in nova <=> ironic communication
> with v3 last time I looked). That allows us to fix things faster because
> we don't wedge some slice of the projects in a gate failure.
>
>  -Sean
>
> On 05/12/2016 11:08 AM, Raildo Mascena wrote:
>  > Hi folks,
>  >
>  > Although the Identity v2 API is deprecated as of Mitaka [1], some
>  > services haven't implemented proper support for v3 yet. For instance,
>  > we implemented a patch that made DevStack v3 by default that, when
>  > merged, broke a lot of project gates in a few hours [2]. This
>  > happened due to specific services incompatibility issues with
> Keystone
>  > v3 API, such as hardcoded v2 usage, usage of removed
> keystoneclient CLI,
>  > requesting v2 service tokens and the lack of keystoneauth session
> usage.
>  >
>  > To discuss those points, we did a cross-project work
>  > session in the Newton Summit[3]. One point we are working on at this
>  > moment is creating gates to ensure the main OpenStack services
>  > can live without the Keystone v2 API. Those gates setup devstack with
>  > only Identity v3 enabled and run the Tempest suite on this
> environment.
>  >
>  > We already did that for a few services, like Nova, Cinder, Glance,
>  > Neutron, Swift. We are doing the same job for other services such
>  > as Ironic, Magnum, Ceilometer, Heat and Barbican [4].
>  >
>  > In addition, we are creating jobs to run functional tests for the
>  > services on this identity v3-only environment[5]. Also, we have a
> couple
>  > of other fronts that we are doing like removing some hardcoded v2
> usage
>  > [6], implementing keystoneauth sessions support in clients and
> APIs [7].
>  >
>  > Our plan is to keep tackling as many items from the cross-project
>  > session etherpad as we can, so we can achieve more confidence in
> moving
>  > to a DevStack working v3-only, making sure everyone is prepared
> to work
>  > with Keystone v3 API.
>  >
>  > Feedback and reviews are very much appreciated.
>  >
>  > [1] https://review.openstack.org/#/c/251530/
>  > [2] https://etherpad.openstack.org/p/v3-only-devstack
>  > [3] https://etherpad.openstack.org/p/newton-keystone-v3-devstack
>  > [4]
> 
> https://review.openstack.org/#/q/project:openstack-infra/project-config+branch:master+topic:v3-only-integrated
>  > [5]
> https://review.openstack.org/#/q/topic:v3-only-functionals-tests-gates
>  > [6]
> https://review.openstack.org/#/q/topic:remove-hardcoded-keystone-v2
>  > [7] https://review.openstack.org/#/q/topic:use-ksa
>  >
>  > Cheers,
>  >
>  > Raildo
>  >
>  >
>  >
>
>
> This  also comes back to the conversation at the summit. We need to
> propose the timeline to turn over for V3 (regardless of
> voting/non-voting today) so that it is possible to set the timeline that
> is expected for everything to get fixed (and where we are
> expecting/planning to stop reverting while focusing on fixing the
> v3-only changes).
>
> I am going to ask the Keystone team to set forth the timeline and commit
> to getting the pieces in order so that we can make v3-only voting rather
> than playing the propose/revert game we're currently doing. A proposed
> timeline and gameplan will only help at this point.
>

can anyone confirm when we deprecated keystonev2? i see a bp[1] related 
to deprecation that was 'implemented' in 2013.

i realise switching to v3 breaks many gates but it'd be good to at some 
point say it's not 'keystonev3 breaking the gate' but rather 'projectx 
is breaking the gate because they are using keystonev2 which was 
deprecated 4 cycles ago'. given the deprecation period allowed already, 
can we say "here's some help, fix/merge this by 
, or your gate will be broken until then"? 
(assuming all the above items by Raildo don't fix everything).

[1] https://blueprints.launchpad.net/keystone/+spec/deprecate-v2-api

cheers,


Re: [openstack-dev] [glance] [VMT] [Security] Proposal to add Brian Rosmaita to the glance-coresec team

2016-05-12 Thread Flavio Percoco

On 12/05/16 18:21 -0400, Nikhil Komawar wrote:



On 5/12/16 6:19 PM, Nikhil Komawar wrote:



   On 5/12/16 6:04 PM, Flavio Percoco wrote:

   On 12/05/16 17:38 -0400, Nikhil Komawar wrote:

   Comments, alternate proposal inline.



   On 5/12/16 8:35 AM, Jeremy Stanley wrote:

   On 2016-05-11 23:39:58 -0400 (-0400), Nikhil Komawar wrote:

   I would like to propose adding Brian to the team.

   [...]

   I'm thrilled to see Glance adding more security-minded
   reviewers for
   embargoed vulnerability reports! One thing to keep in mind
   though is
   that you need to keep the list of people with access to these
   relatively small; I see
   https://launchpad.net/~glance-coresec/+members has five members
   now.


   Thanks for raising this. Yes, we are worried about it too. But as
   you
   bring it up, it becomes even more important. A lot of Glancers time
   share with other projects and lack bandwidth to contribute fully to
   this
   responsibility. Currently, I do not know if anyone can be rotated
   out as
   we have had pretty good input from all the folks there.


   While the size I picked in item #2 at
   https://governance.openstack.org/reference/tags/vulnerability_managed.html#requirements
   is not meant to be a strict limit, you may still want to take
   this
   as an opportunity to rotate out some of your less-active
   reviewers
   (if there are any).




   Thanks for not being strict on it.

   I do however, want to make another proposal:


   Since Stuart is our VMT liaison and he's on hiatus, can we add
   Brian as
   his substitute. As soon as Stuart is back and is ready to shoulder
   this
   responsibility we should do the rotation.

   Please vote +1, 0, -1.

   I will consider final votes by Thursday May 19, 2100 UTC.



   Can we ask Stuart if he's ok with us removing him from the coresec
   team? I think
   he won't have time for it and it'd be irresponsible of us to send VMT
   bugs to
   him at this point.



I just realized we both meant the same thing; my description wasn't too clear,
though, on what I meant by rotation.


Ah-ha! Gotcha! then +1 from me too :)




   Confirmation enqueue.


   Cheers,
   Flavio


  


   
__
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


   --

   Thanks,
   Nikhil


--

Thanks,
Nikhil



--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project][infra][keystone] Moving towards a Identity v3-only on Devstack - Next Steps

2016-05-12 Thread Adam Young

On 05/12/2016 01:47 PM, Morgan Fainberg wrote:



On Thu, May 12, 2016 at 10:42 AM, Sean Dague wrote:


We just had to revert another v3 "fix" because it wasn't verified to
work correctly in the gate - https://review.openstack.org/#/c/315631/

While I realize project-config patches are harder to test, you can
do so
with a bogus devstack-gate change that has the same impact in some
cases
(like the case above).

I think the important bit on moving forward is that every patch here
which might be disruptive has some manual verification about it
working
posted in review by v3 team members before we approve them.

I also think we need to largely stay non voting on the v3 only job
until
we're quite confident that the vast majority of things are flipped
over
(for instance there remains an issue in nova <=> ironic communication
with v3 last time I looked). That allows us to fix things faster
because
we don't wedge some slice of the projects in a gate failure.

-Sean

On 05/12/2016 11:08 AM, Raildo Mascena wrote:
> Hi folks,
>
> Although the Identity v2 API is deprecated as of Mitaka [1], some
> services haven't implemented proper support for v3 yet. For instance,
> we implemented a patch that made DevStack v3 by default that, when
> merged, broke a lot of project gates in a few hours [2]. This
> happened due to specific services incompatibility issues with
Keystone
> v3 API, such as hardcoded v2 usage, usage of removed
keystoneclient CLI,
> requesting v2 service tokens and the lack of keystoneauth
session usage.
>
> To discuss those points, we did a cross-project work
> session in the Newton Summit[3]. One point we are working on at this
> moment is creating gates to ensure the main OpenStack services
> can live without the Keystone v2 API. Those gates setup devstack
with
> only Identity v3 enabled and run the Tempest suite on this
environment.
>
> We already did that for a few services, like Nova, Cinder, Glance,
> Neutron, Swift. We are doing the same job for other services such
> as Ironic, Magnum, Ceilometer, Heat and Barbican [4].
>
> In addition, we are creating jobs to run functional tests for the
> services on this identity v3-only environment[5]. Also, we have
a couple
> of other fronts that we are doing like removing some hardcoded
v2 usage
> [6], implementing keystoneauth sessions support in clients and
APIs [7].
>
> Our plan is to keep tackling as many items from the cross-project
> session etherpad as we can, so we can achieve more confidence in
moving
> to a DevStack working v3-only, making sure everyone is prepared
to work
> with Keystone v3 API.
>
> Feedback and reviews are very much appreciated.
>
> [1] https://review.openstack.org/#/c/251530/
> [2] https://etherpad.openstack.org/p/v3-only-devstack
> [3] https://etherpad.openstack.org/p/newton-keystone-v3-devstack
> [4]

https://review.openstack.org/#/q/project:openstack-infra/project-config+branch:master+topic:v3-only-integrated
> [5]
https://review.openstack.org/#/q/topic:v3-only-functionals-tests-gates
> [6]
https://review.openstack.org/#/q/topic:remove-hardcoded-keystone-v2
> [7] https://review.openstack.org/#/q/topic:use-ksa
>
> Cheers,
>
> Raildo
>
>
>


This  also comes back to the conversation at the summit. We need to 
propose the timeline to turn over for V3 (regardless of 
voting/non-voting today) so that it is possible to set the timeline 
that is expected for everything to get fixed (and where we are 
expecting/planning to stop reverting while focusing on fixing the 
v3-only changes).


I am going to ask the Keystone team to set forth the timeline and 
commit to getting the pieces in order so that we can make v3-only 
voting rather than playing the propose/revert game we're currently 
doing. A proposed timeline and gameplan will only help at this point.


I would like to draw a line in the sand and say that it has to be there 
for Ocata. We should be working through the issues during the Newton release, 
and have a firm "Ocata should expect to be run V3 only, with V2.0 an 
optional feature that can be enabled for backwards compatibility if 
required."



To me, Ocata is the finish line:  there have been a lot of features that 
Keystone has needed for a long time, and they are finally starting to 
come together. V3 support, and all that it enables, is one of the big 
ones, and we need to give people enough information to plan, and enough 
time to adjust.



V3 support is essential for the way that most people are deploying: User 
database coming from an external source like LDAP, service users stored 
locally.  We need to treat this as baseline, and make sure that the 
deployment 

Re: [openstack-dev] [glance] [VMT] [Security] Proposal to add Brian Rosmaita to the glance-coresec team

2016-05-12 Thread Nikhil Komawar


On 5/12/16 6:19 PM, Nikhil Komawar wrote:
>
>
> On 5/12/16 6:04 PM, Flavio Percoco wrote:
>> On 12/05/16 17:38 -0400, Nikhil Komawar wrote:
>>> Comments, alternate proposal inline.
>>>
>>>
>>>
>>> On 5/12/16 8:35 AM, Jeremy Stanley wrote:
 On 2016-05-11 23:39:58 -0400 (-0400), Nikhil Komawar wrote:
> I would like to propose adding Brian to the team.
 [...]

 I'm thrilled to see Glance adding more security-minded reviewers for
 embargoed vulnerability reports! One thing to keep in mind though is
 that you need to keep the list of people with access to these
 relatively small; I see
 https://launchpad.net/~glance-coresec/+members has five members now.
>>>
>>> Thanks for raising this. Yes, we are worried about it too. But as you
>>> bring it up, it becomes even more important. A lot of Glancers time
>>> share with other projects and lack bandwidth to contribute fully to
>>> this
>>> responsibility. Currently, I do not know if anyone can be rotated
>>> out as
>>> we have had pretty good input from all the folks there.
>>>
 While the size I picked in item #2 at
 >>> https://governance.openstack.org/reference/tags/vulnerability_managed.html#requirements
 is not meant to be a strict limit, you may still want to take this
 as an opportunity to rotate out some of your less-active reviewers
 (if there are any).


>>>
>>> Thanks for not being strict on it.
>>>
>>> I do however, want to make another proposal:
>>>
>>>
>>> Since Stuart is our VMT liaison and he's on hiatus, can we add Brian as
>>> his substitute. As soon as Stuart is back and is ready to shoulder this
>>> responsibility we should do the rotation.
>>>
>>> Please vote +1, 0, -1.
>>>
>>> I will consider final votes by Thursday May 19, 2100 UTC.
>>
>>
>> Can we ask Stuart if he's ok with us removing him from the coresec
>> team? I think
>> he won't have time for it and it'd be irresponsible of us to send
>> VMT bugs to
>> him at this point.
>>

I just realized we both meant the same thing; my description wasn't too
clear, though, on what I meant by rotation.

>
> Confirmation enqueue.
>
>> Cheers,
>> Flavio
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> -- 
>
> Thanks,
> Nikhil

-- 

Thanks,
Nikhil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [VMT] [Security] Proposal to add Brian Rosmaita to the glance-coresec team

2016-05-12 Thread Nikhil Komawar


On 5/12/16 6:04 PM, Flavio Percoco wrote:
> On 12/05/16 17:38 -0400, Nikhil Komawar wrote:
>> Comments, alternate proposal inline.
>>
>>
>>
>> On 5/12/16 8:35 AM, Jeremy Stanley wrote:
>>> On 2016-05-11 23:39:58 -0400 (-0400), Nikhil Komawar wrote:
 I would like to propose adding Brian to the team.
>>> [...]
>>>
>>> I'm thrilled to see Glance adding more security-minded reviewers for
>>> embargoed vulnerability reports! One thing to keep in mind though is
>>> that you need to keep the list of people with access to these
>>> relatively small; I see
>>> https://launchpad.net/~glance-coresec/+members has five members now.
>>
>> Thanks for raising this. Yes, we are worried about it too. But as you
>> bring it up, it becomes even more important. A lot of Glancers time
>> share with other projects and lack bandwidth to contribute fully to this
>> responsibility. Currently, I do not know if anyone can be rotated out as
>> we have had pretty good input from all the folks there.
>>
>>> While the size I picked in item #2 at
>>> >> https://governance.openstack.org/reference/tags/vulnerability_managed.html#requirements
>>> is not meant to be a strict limit, you may still want to take this
>>> as an opportunity to rotate out some of your less-active reviewers
>>> (if there are any).
>>>
>>>
>>
>> Thanks for not being strict on it.
>>
>> I do however, want to make another proposal:
>>
>>
>> Since Stuart is our VMT liaison and he's on hiatus, can we add Brian as
>> his substitute. As soon as Stuart is back and is ready to shoulder this
>> responsibility we should do the rotation.
>>
>> Please vote +1, 0, -1.
>>
>> I will consider final votes by Thursday May 19, 2100 UTC.
>
>
> Can we ask Stuart if he's ok with us removing him from the coresec
> team? I think
> he won't have time for it and it'd be irresponsible of us to send
> VMT bugs to
> him at this point.
>

Confirmation enqueue.

> Cheers,
> Flavio
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [VMT] [Security] Proposal to add Brian Rosmaita to the glance-coresec team

2016-05-12 Thread Flavio Percoco

On 12/05/16 17:38 -0400, Nikhil Komawar wrote:

Comments, alternate proposal inline.



On 5/12/16 8:35 AM, Jeremy Stanley wrote:

On 2016-05-11 23:39:58 -0400 (-0400), Nikhil Komawar wrote:

I would like to propose adding Brian to the team.

[...]

I'm thrilled to see Glance adding more security-minded reviewers for
embargoed vulnerability reports! One thing to keep in mind though is
that you need to keep the list of people with access to these
relatively small; I see
https://launchpad.net/~glance-coresec/+members has five members now.


Thanks for raising this. Yes, we are worried about it too. But as you
bring it up, it becomes even more important. A lot of Glancers time
share with other projects and lack bandwidth to contribute fully to this
responsibility. Currently, I do not know if anyone can be rotated out as
we have had pretty good input from all the folks there.


While the size I picked in item #2 at
https://governance.openstack.org/reference/tags/vulnerability_managed.html#requirements
is not meant to be a strict limit, you may still want to take this
as an opportunity to rotate out some of your less-active reviewers
(if there are any).




Thanks for not being strict on it.

I do however, want to make another proposal:


Since Stuart is our VMT liaison and he's on hiatus, can we add Brian as
his substitute. As soon as Stuart is back and is ready to shoulder this
responsibility we should do the rotation.

Please vote +1, 0, -1.

I will consider final votes by Thursday May 19, 2100 UTC.



Can we ask Stuart if he's ok with us removing him from the coresec team? I think
he won't have time for it and it'd be irresponsible of us to send VMT bugs to
him at this point.

Cheers,
Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [VMT] [Security] Proposal to add Brian Rosmaita to the glance-coresec team

2016-05-12 Thread Nikhil Komawar
Comments, alternate proposal inline.



On 5/12/16 8:35 AM, Jeremy Stanley wrote:
> On 2016-05-11 23:39:58 -0400 (-0400), Nikhil Komawar wrote:
>> I would like to propose adding Brian to the team.
> [...]
>
> I'm thrilled to see Glance adding more security-minded reviewers for
> embargoed vulnerability reports! One thing to keep in mind though is
> that you need to keep the list of people with access to these
> relatively small; I see
> https://launchpad.net/~glance-coresec/+members has five members now.

Thanks for raising this. Yes, we are worried about it too. But as you
bring it up, it becomes even more important. A lot of Glancers time
share with other projects and lack bandwidth to contribute fully to this
responsibility. Currently, I do not know if anyone can be rotated out as
we have had pretty good input from all the folks there.

> While the size I picked in item #2 at
>  https://governance.openstack.org/reference/tags/vulnerability_managed.html#requirements
> is not meant to be a strict limit, you may still want to take this
> as an opportunity to rotate out some of your less-active reviewers
> (if there are any).
>
>

Thanks for not being strict on it.

I do however, want to make another proposal:


Since Stuart is our VMT liaison and he's on hiatus, can we add Brian as
his substitute. As soon as Stuart is back and is ready to shoulder this
responsibility we should do the rotation.

Please vote +1, 0, -1.

I will consider final votes by Thursday May 19, 2100 UTC.

-- 

Thanks,
Nikhil



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Focus for release week R-16 May 16-20

2016-05-12 Thread Nikhil Komawar
Hi Glancers,

As per Doug's email [1], for R-16 the core team (and everyone else
interested) is expected to focus on reviews for the following, in priority
order:

1. Nikhil to publish remnant summaries, priority and process related
commits to glance, glance-specs and ML (please review next week).

2. Start reviewing the priority specs first; these include the Import
Refactor Update spec, Nova's Glance v2 support spec, the Community
Image sharing spec and the Documentation changes as they get published
(one by one). This is a lot of work with regard to addressing comments
in specs and updating docs, so we need to help them move forward as far
as possible.

3. Then focus on other specs, starting with full specs first and then
lite-specs. Raise bigger questions initially and give time for authors
to address them; we can focus on grammar, syntax, bike-shedding on
names, etc. later. DO NOT invest time reviewing WIP specs or patches.
(Spec authors are expected to be prompt in updating their spec once
reviewed)

4. Look out for emails with tag [glance] and Focus for release week
 to see what is the priority for that week.

5. If you are working on anything else besides what is posted above, you
are highly encouraged to NOT distract reviewers; also, you are expected to
start reviewing the above changes and look for ways to move them forward.


You can reach out to me for further questions.


[1] http://lists.openstack.org/pipermail/openstack-dev/2016-May/094863.html

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project][quotas][delimiter] Austin Summit - Design Session Summary

2016-05-12 Thread Nikhil Komawar



On 5/12/16 4:33 PM, Doug Hellmann wrote:
> Excerpts from Nikhil Komawar's message of 2016-05-12 15:40:05 -0400:
>> Please find my response inline:
>>
>> On 5/12/16 1:10 PM, Doug Hellmann wrote:
>>> Excerpts from Thierry Carrez's message of 2016-05-12 13:37:32 +0200:
 Tim Bell wrote:
> [...]
> I think it will be really difficult to persuade the mainstream projects 
> to adopt
> a library if it is not part of Oslo. Developing a common library for quota
> management outside the scope of the common library framework for OpenStack
> does not seem to be encouraging the widespread use of delimiter.
> [...]
 I agree it's hard enough to get Oslo libraries used across the board, I 
 don't think the task would be any easier with an external library.

 One case that would justify being external is if the library was 
 generally useful rather than OpenStack-specific: then the Oslo branding 
 might hinder its out-of-OpenStack adoption. But it doesn't feel like 
 that is the case here ?

>>> In the past we've tried to encourage folks creating very specially
>>> focused libraries for areas where the existing Oslo team has no
>>> real experience, such as os-win, to set up their own team. The Oslo team
>>> doesn't have to own all libraries.
>> Thanks for that pointer!
>>
>>> On the other hand, in this case I think quota management fits in Oslo as
>>> well as messaging or policy do. We have a mechanism in place for managing
>>> sub-teams so signing up to work on quotas doesn't have to mean signing
>>> up to be oslo-core.
>>
>> Yes, I agree that this fits well into the cross-project consistency
>> domain. And yes, thanks for proposing the sub-team strategy to move forward.
>>
>> However, this library currently doesn't exist. We are still identifying
>> what we want to achieve as a part of this scope, there's a ton of
>> discussions in progress and we are on the verge of finding concrete
>> tasks for people to pick up (so no second commit yet). Even after having
>> done something we do not know if that is something which will work for
>> all the projects -- basically I am trying to say quotas is a big domain
>> and now we are starting (very) small. We need a concrete implementation
>> and its adoption in a couple of projects to even say that it is a
>> successful cross project implementation.
>>
>> The last thing we want to worry about is more process, governance and an
>> approach to over-standardize things when we do not even have anything in
>> tree. I think it makes sense as a part of somewhere _all_ projects can
>> adopt but not until it's ready to be adopted.
> I'm not sure what processes you're talking about that might be a burden.
> Can you elaborate?
>
>>> The notion that we're going to try to have all projects depend on
>>> something we create but that we *don't* create as part of an official
>>> project is extremely confusing. Whether we make it part of Oslo or part
>>> of its own thing, I think we want this to be official.
>> Yes, that exists as a notion but it's agreed upon in the ML thread [1]
>> that it's not practical yet. The hardest thing to achieve is to get the
>> quotas code right and, for now, we would like to focus on that. We
>> do want to worry about governance and adoption across domain
>> (standardization) once we do have a standard.
> If you go off in a corner and build something that doesn't fit any of
> our community standards for code or API, how do you expect the adoption
> process to work out?
>

But we are not working in a corner, we've a representative from cinder,
someone from trove who is interested in long term quota goals, someone
who has worked on nova (and then we are borrowing the gen_id concept of
Jay from Nova), someone from Zaqar, a PWG representative who is also
collaborating on UX study, someone who is interested from vmware's
perspective. And then we are developing this on the #openstack-irc
channel, ML, weekly meetings etc.

Once some code is published, we will reflect back on the cross project
meetings to where things are heading.
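
For readers unfamiliar with the gen_id idea mentioned above: it is a
compare-and-swap on a generation column, so two concurrent consumers cannot
both slip under the limit. A self-contained sketch of the pattern (sqlite
and the table layout are illustrative, not the actual delimiter code):

import sqlite3

def consume(conn, project, resource, delta, limit, retries=5):
    """Consume quota with a compare-and-swap on a generation column."""
    for _ in range(retries):
        in_use, gen = conn.execute(
            "SELECT in_use, generation FROM usage"
            " WHERE project = ? AND resource = ?",
            (project, resource)).fetchone()
        if in_use + delta > limit:
            raise ValueError("over quota")
        # The update only lands if nobody touched the row since we read it.
        cur = conn.execute(
            "UPDATE usage SET in_use = ?, generation = generation + 1"
            " WHERE project = ? AND resource = ? AND generation = ?",
            (in_use + delta, project, resource, gen))
        if cur.rowcount == 1:
            conn.commit()
            return
    raise RuntimeError("lost the generation race; retry later")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usage (project TEXT, resource TEXT,"
             " in_use INTEGER, generation INTEGER)")
conn.execute("INSERT INTO usage VALUES ('demo', 'instances', 0, 0)")
consume(conn, "demo", "instances", 2, limit=10)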

>> With that in mind, please suggest the path of least resistance for moving
>> forward quickly on this. We have seen good interest in the library and
>> we want people to start working on things without having to wait on
>> decisions they are not interested in. We do, however, want to be
>> compliant with the governance rules so that it does not diverge too far.
>> Simply put, can we revisit the governance decision late in Newton?
>>
>>> Before pushing too hard to bring the new lib under the Oslo umbrella,
>>> I'd like to understand why folks might not want to do that. What "costs"
>>> are perceived?
>> Much appreciated!
>>
>>> Doug
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> 

Re: [openstack-dev] [horizon][keystone] Getting Auth Token from Horizon when using Federation

2016-05-12 Thread Edmund Rhudy (BLOOMBERG/ 120 PARK)
I flubbed my description of what I had in mind - I was thinking of GitHub 
personal access tokens as a model, _not_ OAuth tokens. I believe the normal 
excuse is "inadequate caffeine".

From: dolph.math...@gmail.com 
Subject: Re: [openstack-dev] [horizon][keystone] Getting Auth Token from 
Horizon when using Federation

On Thu, May 12, 2016 at 8:10 AM Edmund Rhudy (BLOOMBERG/ 120 PARK) 
 wrote:

+1 on desiring OAuth-style tokens in Keystone.

OAuth 1.0a has been supported by keystone since the havana release, you just 
have to turn it on and use it:

  http://docs.openstack.org/developer/keystone/configuration.html#oauth1-1-0a 
-- 
-Dolph
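
For those wondering what "turn it on" involves, a rough sketch based on the
configuration docs of that era (the paste wiring applies to older releases
where oauth1 shipped as a contrib extension; the exact factory path is from
memory, so verify against your release):

# keystone.conf: add oauth1 to the enabled authentication methods
[auth]
methods = external,password,token,oauth1

# keystone-paste.ini (older releases only): define the filter and add
# oauth1_extension to the public_api pipeline
[filter:oauth1_extension]
paste.filter_factory = keystone.contrib.oauth1.routers:OAuth1Extension.factory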

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] How to document 'labels'

2016-05-12 Thread Ton Ngo
I would like to add some context for a bigger picture so we might arrive at
a more general solution.
Typically CLI options are fairly static so the help messages are usually
coded directly in the client.
This provides a good user experience by making the info readily available
to the users.
The case for label is a little different, because label provides a general
mechanism to pass arbitrary key/value pairs.
The meaning for these key/value is interpreted based on a particular option
in a particular COE type, so they are not generally applicable.
For instance, a Kubernetes baymodel that uses flannel as the network driver
supports the label "flannel_backend", with the allowed values "udp", "vxlan",
and "host-gw".
This particular label would be meaningless in another COE like Mesos.

So to provide this kind of info to the users, we have to address 2 issues:
1. How the user can discover the available labels specific to the COE,
   along with the valid values to specify.
2. How to maintain the help messages specific to each label, since they
   are likely to change frequently.

I think just displaying the info for all labels together would not be too
helpful.
Putting the info in the API would make it available to tools built on
Magnum, but Madhuri has a good point that the info would not be available
when the server is not running.
We need to accommodate new bay drivers that may add new labels.
Validation for the label value is another requirement.

Here is a thought on how to meet these requirements:  we can install a yaml
file in /etc/magnum/labels.yaml that would describe all the supported
labels, something like:

flannel_backend:
  valid_values:
    - udp
    - vxlan
    - host-gw
  default_value: udp
  COE:
    - kubernetes
    - swarm
  help_message: >-
    This label is used with the flannel network driver to specify the
    type of back end to use. The option host-gw gives the best
    bandwidth performance.
  doc_url: "http://xxx"

Then the client can read this yaml file and list the labels for a COE, say
Kubernetes.  For help on a specific label, the client would print the help
message along with the url for further info (if available).
The validation code can also load this file for a more general way to
validate the baymodel, avoiding hardcoding in api/attr_validator.py
New bay drivers can just append new labels to this file to have them
handled in the same way as Magnum supported labels.
Optionally, the API server can provide access to this info.
In the source, we can keep this file in etc/magnum/labels.yaml.
Maintaining the help messages would be simpler since we just need to edit
the file.
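
A minimal sketch of that client-side reader, assuming PyYAML and the file
layout above (labels_for_coe and the path are illustrative, not existing
Magnum code):

import yaml

def labels_for_coe(path, coe):
    """Return only the labels that apply to the given COE."""
    with open(path) as f:
        labels = yaml.safe_load(f)
    return {name: meta for name, meta in labels.items()
            if coe in meta.get("COE", [])}

# e.g. print the help text for every Kubernetes label
for name, meta in sorted(labels_for_coe("/etc/magnum/labels.yaml",
                                        "kubernetes").items()):
    print("%s: %s" % (name, meta["help_message"]))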

Thoughts?

Ton,




From:   Jamie Hannaford 
To: "OpenStack Development Mailing List (not for usage questions)"

Cc: Qun XK Wang 
Date:   05/12/2016 07:08 AM
Subject:Re: [openstack-dev] [magnum] How to document 'labels'



+1 for 1 and 3.

I'm not sure maintainability should discourage us from exposing information
to the user through the client - we'll face the same maintenance burden as
we currently do, and IMO it's our job as a team to ensure our docs are
up-to-date. Any kind of input which touches the API should also live in the
API docs, because that's in line with every other OpenStack service.

I don't think I've seen documentation exposed via the API before (#2). I
think it's a lot of work too, and I don't see what benefit it provides.

Jamie




From: Hongbin Lu 
Sent: 11 May 2016 21:52
To: OpenStack Development Mailing List (not for usage questions)
Cc: Qun XK Wang
Subject: [openstack-dev] [magnum] How to document 'labels'

Hi all,

This is a continued discussion from the last team meeting. For recap,
‘labels’ is a property in baymodel and is used by users to input additional
key/value pairs to configure the bay. In the last team meeting, we discussed
what is the best way to document ‘labels’. In general, I heard three
options:
  1.   Place the documentation in Magnum CLI as help text (as
  Wangqun proposed [1][2]).
  2.   Place the documentation in Magnum server and expose them via
  the REST API. Then, have the CLI to load help text of individual
  properties from Magnum server.
  3.   Place the documentation in a documentation server (like
  developer.openstack.org/…), and add the doc link to the CLI help
  text.
For option #1, I think an advantage is that it is close to end-users, thus
providing a better user experience. In contrast, Tom Cammann pointed out a
disadvantage that the CLI help text might more easily become out of date.
For option #2, it should work but incurs a lot of extra work. For option
#3, the disadvantage is the user experience (since users need to click the
link to see the documents), but it is easier for us to maintain. I am
wondering if it is possible to have a combination of #1 and #3. Thoughts?

[1] 

Re: [openstack-dev] [cross-project][quotas][delimiter] Austin Summit - Design Session Summary

2016-05-12 Thread Doug Hellmann
Excerpts from Nikhil Komawar's message of 2016-05-12 15:40:05 -0400:
> Please find my response inline:
> 
> On 5/12/16 1:10 PM, Doug Hellmann wrote:
> > Excerpts from Thierry Carrez's message of 2016-05-12 13:37:32 +0200:
> >> Tim Bell wrote:
> >>> [...]
> >>> I think it will be really difficult to persuade the mainstream projects 
> >>> to adopt
> >>> a library if it is not part of Oslo. Developing a common library for quota
> >>> management outside the scope of the common library framework for OpenStack
> >>> does not seem to be encouraging the widespread use of delimiter.
> >>> [...]
> >> I agree it's hard enough to get Oslo libraries used across the board, I 
> >> don't think the task would be any easier with an external library.
> >>
> >> One case that would justify being external is if the library was 
> >> generally useful rather than OpenStack-specific: then the Oslo branding 
> >> might hinder its out-of-OpenStack adoption. But it doesn't feel like 
> >> that is the case here ?
> >>
> > In the past we've tried to encourage folks creating very specially
> > focused libraries for areas where the existing Oslo team has no
> > real experience, such as os-win, to set up their own team. The Oslo team
> > doesn't have to own all libraries.
> 
> Thanks for that pointer!
> 
> > On the other hand, in this case I think quota management fits in Oslo as
> > well as messaging or policy do. We have a mechanism in place for managing
> > sub-teams so signing up to work on quotas doesn't have to mean signing
> > up to be oslo-core.
> 
> 
> Yes, I agree that this fits well into the cross-project consistency
> domain. And yes, thanks for proposing the sub-team strategy to move forward.
> 
> However, this library currently doesn't exist. We are still identifying
> what we want to achieve as a part of this scope, there's a ton of
> discussions in progress and we are on the verge of finding concrete
> tasks for people to pick up (so no second commit yet). Even after having
> done something we do not know if that is something which will work for
> all the projects -- basically I am trying to say quotas is a big domain
> and now we are starting (very) small. We need a concrete implementation
> and its adoption in a couple of projects to even say that it is a
> successful cross project implementation.
> 
> The last thing we want to worry about is more process, governance and an
> approach to over-standardize things when we do not even have anything in
> tree. I think it makes sense as a part of somewhere _all_ projects can
> adopt but not until it's ready to be adopted.

I'm not sure what processes you're talking about that might be a burden.
Can you elaborate?

> 
> > The notion that we're going to try to have all projects depend on
> > something we create but that we *don't* create as part of an official
> > project is extremely confusing. Whether we make it part of Oslo or part
> > of its own thing, I think we want this to be official.
> 
> Yes, that exists as a notion but it's agreed upon in the ML thread [1]
> that it's not practical yet. The hardest thing to achieve is to get the
> quotas code right and, for now, we would like to focus on that. We
> do want to worry about governance and adoption across domain
> (standardization) once we do have a standard.

If you go off in a corner and build something that doesn't fit any of
our community standards for code or API, how do you expect the adoption
process to work out?

> 
> With that in mind, please suggest the path of least resistance for moving
> forward quickly on this. We have seen good interest in the library and
> we want people to start working on things without having to wait on
> decisions they are not interested in. We do, however, want to be
> compliant with the governance rules so that it does not diverge too far.
> Simply put, can we revisit the governance decision late in Newton?
> 
> > Before pushing too hard to bring the new lib under the Oslo umbrella,
> > I'd like to understand why folks might not want to do that. What "costs"
> > are perceived?
> 
> Much appreciated!
> 
> >
> > Doug
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/thread.html#89453
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-12 Thread Fausto Marzi
My take would be to select no more than 3 languages, according to what they
are needed for, then let each service team pick the right one for what needs
to be done.

Something like:

- Do you need more performance for this component in your service? OK, use
this.
- Do you need Web and alike? Ok, use that.
- General and Default: Python.
- Anything else? Then you can use this.

With this approach we would cover the needs of developers that have to
implement services and at the same time provide options for languages that
are better suited for related use cases.

All the gating, tests and the like will be focused on those selected languages.

This would also settle future discussions: if, for instance, a new language
is proposed for performance, we would already have an answer. At the same
time we would stay open to alternatives, but only to those alternatives, and
still be able to keep control and focus.

Thanks,
Fausto


On Thu, May 12, 2016 at 9:04 PM, Robert Collins wrote:

> On 13 May 2016 at 00:01, Thierry Carrez  wrote:
> > Robert Collins wrote:
> >>
> >> [...]
> >> So, given that that is the model - why is language part of it? Yes,
> >> there are minimum overheads to having a given language in CI [...]
> >
> >
> > By "minimum" do you mean that they are someone else's problem ?
>
> No. I mean that there is an irreducible cost: there is some minimum
> and we need to account for that. (as in fact the end of that paragraph
> which you snipped, said).
>
> There are economics at play here. Adding a language simplifies the work of
> some and makes more work for others. Obviously "some" see mostly benefits
> and "others" see mostly drawbacks. You're just creating an externality[1].
>
> I'm not sure that allowing arbitrary languages would constitute an
> externality, properly structured - which is what I was trying to get
> at with my mail.
>
> So rather than shifting workloads to someone else or pretending there is no
> problem, let's take the time to calmly measure the cost, see what resources
> we have lined up to address that cost, and make a decision from there.
>
> I proposed neither of those things (shifting, or pretence). Rebutting
> those positions is not rebutting my argument.
>
>  I agree with taking the time to assess things carefully, but in this
> discussion no one had taken a general 'pro' stance, so I decided, as an
> exercise, to write one up.
>
> However, assessing 'what we have to address that cost' is missing the
> actual point: we don't need to cover the cost /once/, we need to make
> how-its-covered, and who-covers-it be structured such that folk
> wanting to use $LANGUAGE provide the [human] resources needed to cover
> the incurred costs. An answer which says 'we can just afford to pay
> for go, but thats it', is an answer that has failed to actually tackle
> the underlying problem.
>
> -Rob
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Fuel - Rack Scale Architecture integration

2016-05-12 Thread Ramakrishna, Deepti
Hello Fuel team,

I am a software engineer working in the OpenStack team at Intel. You may have 
heard of Rack Scale Architecture [1] that Intel is pioneering. It is a new data 
center architecture that "simplifies resource management and provides the 
ability to dynamically compose resources based on workload-specific demands". 
It is supported by multiple industry partners.

We would like to propose Fuel integration with this. The first step would be UI 
integration [2] and we would like to have a tab similar to the VMWare tab 
(whose visibility is controlled by a config flag) that talks to the Redfish API 
[3] for displaying resources such as pods, racks, etc. as exposed by this API. 
Note that Redfish API is an open industry standard API supported by multiple 
companies.
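
For the curious, talking to a Redfish service is plain HTTPS plus JSON; a
minimal sketch of walking the chassis collection (the pod manager URL is
made up, and certificate verification is skipped only for brevity):

import requests

BASE = "https://pod-manager.example.com"  # illustrative endpoint

# The Redfish service root links to the top-level resource collections.
root = requests.get(BASE + "/redfish/v1/", verify=False).json()
chassis_url = root["Chassis"]["@odata.id"]

# Walk the chassis collection (racks, drawers, modules, ...).
collection = requests.get(BASE + chassis_url, verify=False).json()
for member in collection["Members"]:
    print(member["@odata.id"])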

I plan to write up a blueprint/spec for the same, but I wanted to know if there 
is any immediate feedback on this idea before I even get started.

Thanks,
Deepti

[1] 
http://www.intel.com/content/www/us/en/architecture-and-technology/intel-rack-scale-architecture.html
[2] http://i.imgur.com/vLJIbwx.jpg
[3] https://www.dmtf.org/standards/redfish

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] sharing bits between nodes during deployment

2016-05-12 Thread Adam Young

On 05/12/2016 02:20 PM, Emilien Macchi wrote:

Hi,

During the recent weeks, we've noticed that some features would have a
common challenge to solve:
How to share informations or files between nodes, during a multi-node
deployment.

A few use-cases:

* Deploying Keystone using Fernet tokens

Adam Young started this topic a few weeks ago, we are investigating
how to integrate Fernet in TripleO.
The main challenge is that we want to generate keys periodically for
security purposes.
In a multi-node environment behind HAProxy, you need to make sure all
Fernet keys are the same; otherwise a token issued by one Keystone server
may fail to validate on another.
We need a way to:
1) generate keys periodically. It could be in puppet-keystone; we
already have a crontab example:
https://github.com/openstack/puppet-keystone/tree/master/manifests/cron
2) distribute the key from a node to other nodes <-- that is the challenge.
note: I confirmed with ayoung, and there is no need to restart
Keystone when we rotate a key.
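
For reference, step 1 really is just a cron job around keystone-manage; an
illustrative entry (the schedule and unix accounts are examples, not a
recommendation):

# /etc/cron.d/keystone-fernet-rotate (illustrative)
# Rotate the Fernet key repository every 6 hours as the keystone user.
0 */6 * * * keystone keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone

Step 2 remains the hard part: something still has to push
/etc/keystone/fernet-keys/ from the node that rotated to all the other nodes.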

* Scaling down/up Swift cluster

It's currently impossible to scale a Swift cluster up or down in TripleO
because the ring is built during deployment and never updated afterwards.
This makes Swift hard to use in production without manual intervention.
Since we can't use the puppet-swift classes that perform this action (they
require PuppetDB), we need another way to redistribute the ring when we add
or remove Swift nodes in a TripleO cloud.
Maybe we can investigate some Mistral actions or Heat to run the swift
commands that redistribute the ring; see the sketch below.
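
Whatever drives it (Mistral actions or Heat), the underlying steps are the
standard swift-ring-builder workflow; an illustrative add-a-node sequence
(IP, device and weight are made up):

swift-ring-builder object.builder add r1z2-192.0.2.15:6000/sda 100
swift-ring-builder object.builder rebalance
# then distribute the regenerated object.ring.gz to every node in the cluster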

* Dynamic discovery

An example of use-case is: https://review.openstack.org/#/c/304125/
We want to manage Keystone resources for services (i.e. nova endpoints,
etc.) from the service roles (i.e. nova-api), so we stop creating all
endpoints from the keystone profile role.
The current issue with composable services is that until now (tell me
if I'm wrong), the keystone role is not aware of whether or not we run
gnocchi-api in our cloud, so we don't know whether we need to create its
endpoints etc.
On the review, please see my (long) comment on Patch Set 12, where we
expose our current challenges.



Policy file distribution also ties in here, especially if we want to be 
able to make different policies for different endpoints of the same service.





I hope that with this thread we can bootstrap some discussion around
these challenges, because we'll keep running into them as the complexity
of OpenStack deployments grows.
Feel free to comment / correct me / give any feedback on this initial
e-mail, thanks for reading so far.





Re: [openstack-dev] [cross-project][quotas][delimiter] Austin Summit - Design Session Summary

2016-05-12 Thread Nikhil Komawar
Please find my response inline:


On 5/12/16 1:10 PM, Doug Hellmann wrote:
> Excerpts from Thierry Carrez's message of 2016-05-12 13:37:32 +0200:
>> Tim Bell wrote:
>>> [...]
>>> I think it will be really difficult to persuade the mainstream projects to 
>>> adopt
>>> a library if it is not part of Oslo. Developing a common library for quota
>>> management outside the scope of the common library framework for OpenStack
>>> does not seem to be encouraging the widespread use of delimiter.
>>> [...]
>> I agree it's hard enough to get Oslo libraries used across the board; I 
>> don't think the task would be any easier with an external library.
>>
>> One case that would justify being external is if the library was 
>> generally useful rather than OpenStack-specific: then the Oslo branding 
>> might hinder its out-of-OpenStack adoption. But it doesn't feel like 
>> that is the case here?
>>
> In the past we've tried to encourage folks creating very specially
> focused libraries in areas where the existing Oslo team has no
> real experience, such as os-win, to set up their own team. The Oslo team
> doesn't have to own all libraries.

Thanks for that pointer!

> On the other hand, in this case I think quota management fits in Oslo as
> well as messaging or policy do. We have a mechanism in place for managing
> sub-teams so signing up to work on quotas doesn't have to mean signing
> up to be oslo-core.


Yes, I agree that this fits well into the cross-project consistency
domain. And yes, thanks for proposing the sub-team strategy to move forward.

However, this library currently doesn't exist. We are still identifying
what we want to achieve as part of this scope, there's a ton of
discussions in progress, and we are on the verge of finding concrete
tasks for people to pick up (so no second commit yet). Even after having
done something, we do not know if it is something which will work for
all the projects -- basically I am trying to say quotas is a big domain
and we are now starting (very) small. We need a concrete implementation
and its adoption in a couple of projects to even say that it is a
successful cross-project implementation.

The last thing we want to worry about is more process, governance and an
attempt to over-standardize things when we do not even have anything in
tree. I think it makes sense as part of something _all_ projects can
adopt, but not until it's ready to be adopted.

> The notion that we're going to try to have all projects depend on
> something we create but that we *don't* create as part of an official
> project is extremely confusing. Whether we make it part of Oslo or part
> of its own thing, I think we want this to be official.

Yes, that exists as a notion, but it was agreed in the ML thread [1]
that it's not practical yet. The hardest thing to achieve is to get the
quotas code right, and for now we would like to focus on that. We
do want to worry about governance and adoption across domains
(standardization) once we do have a standard.

With that in mind, please suggest the path of least resistance for moving
forward quickly on this. We have seen good interest in the library and
we want people to start working on things without having to wait on
decisions they are not interested in. We do, however, want to be
compliant with the governance rules so that it does not diverge too far.
Simply put, can we revisit the governance decision late in Newton?


> Before pushing too hard to bring the new lib under the Oslo umbrella,
> I'd like to understand why folks might not want to do that. What "costs"
> are perceived?

Much appreciated!

>
> Doug
>

[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-March/thread.html#89453

-- 

Thanks,
Nikhil




Re: [openstack-dev] [horizon][keystone] Getting Auth Token from Horizon when using Federation

2016-05-12 Thread Dolph Mathews
On Thu, May 12, 2016 at 8:10 AM Edmund Rhudy (BLOOMBERG/ 120 PARK) <
erh...@bloomberg.net> wrote:

> +1 on desiring OAuth-style tokens in Keystone.
>

OAuth 1.0a has been supported by keystone since the Havana release; you
just have to turn it on and use it:


http://docs.openstack.org/developer/keystone/configuration.html#oauth1-1-0a
-- 
-Dolph


Re: [openstack-dev] [nova] api-ref sprint - Thursday Status - nova core reviewers needed

2016-05-12 Thread Sean Dague
On 05/09/2016 08:23 AM, Sean Dague wrote:
> There is a lot of work to be done to get the api-ref into a final state.
> 
> Review / fix existing patches -
> https://review.openstack.org/#/q/project:openstack/nova+file:api-ref+status:open
> shows patches not yet merged. Please review them, and if there are
> issues feel free to fix them.
> 
> Help create new API ref changes verifying some of the details -
> https://wiki.openstack.org/wiki/NovaAPIRef

We're still chugging along on content updates; the current burndown has us
down to about 170 tags left: http://burndown.dague.org/

Things are starting to get backed up on reviewers now - 32 open reviews
at the moment. If you are nova core, reviews on these would be great, so
we can keep down the in-flight reviews. We're starting to get duplication
of submissions because there is so much in flight:

Open reviews changing files: 32
 - https://review.openstack.org/310096 -
api-ref/source/os-hypervisors.inc, api-ref/source/parameters.yaml
 - https://review.openstack.org/310420 - api-ref/source/parameters.yaml,
api-ref/source/servers.inc
 - https://review.openstack.org/311071 - api-ref/source/os-floating-ips.inc
 - https://review.openstack.org/311727 - api-ref/source/parameters.yaml,
api-ref/source/servers-admin-action.inc
 - https://review.openstack.org/314101 - api-ref/source/extensions.inc,
api-ref/source/parameters.yaml
 - https://review.openstack.org/314133 - api-ref/source/flavors.inc,
api-ref/source/parameters.yaml
 - https://review.openstack.org/314268 - api-ref/source/images.inc
 - https://review.openstack.org/314320 - api-ref/source/ips.inc,
api-ref/source/parameters.yaml
 - https://review.openstack.org/314325 - api-ref/source/os-volumes.inc
 - https://review.openstack.org/314502 - api-ref/source/os-keypairs.inc,
api-ref/source/parameters.yaml
 - https://review.openstack.org/314566 -
api-ref/source/_static/api-site.css, api-ref/source/parameters.yaml,
api-ref/source/servers-action-crash-dump.inc
 - https://review.openstack.org/314776 - api-ref/source/parameters.yaml,
api-ref/source/servers-action-evacuate.inc
 - https://review.openstack.org/314796 - api-ref/source/os-certificates.inc
 - https://review.openstack.org/314833 -
api-ref/source/os-quota-sets.inc, api-ref/source/parameters.yaml
 - https://review.openstack.org/314932 -
api-ref/source/servers-admin-action.inc,
doc/api_samples/versions/v21-version-get-resp.json,
doc/api_samples/versions/versions-get-resp.json,
nova/api/openstack/api_version_request.py,
nova/api/openstack/compute/migrate_server.py,
nova/api/openstack/rest_api_version_history.rst, nova/conductor/manager.py
 - https://review.openstack.org/315126 -
api-ref/source/os-security-group-rules.inc, api-ref/source/parameters.yaml
 - https://review.openstack.org/315145 -
api-ref/source/servers-action-deferred-delete.inc
 - https://review.openstack.org/315199 -
api-ref/source/os-floating-ips-bulk.inc
 - https://review.openstack.org/315212 - api-ref/source/index.rst,
api-ref/source/os-server-external-events.inc,
api-ref/source/os-services.inc, api-ref/source/parameters.yaml
 - https://review.openstack.org/315216 - api-ref/source/limits.inc,
api-ref/source/parameters.yaml
 - https://review.openstack.org/315220 - api-ref/source/servers-actions.inc
 - https://review.openstack.org/315252 - api-ref/source/ips.inc
 - https://review.openstack.org/315284 -
api-ref/source/os-floating-ip-dns.inc
 - https://review.openstack.org/315289 -
api-ref/source/os-volume-attachments.inc
 - https://review.openstack.org/315318 -
api-ref/source/os-interface.inc, api-ref/source/parameters.yaml
 - https://review.openstack.org/315394 -
api-ref/source/os-server-groups.inc, api-ref/source/parameters.yaml
 - https://review.openstack.org/315517 -
api-ref/source/os-aggregates.inc, api-ref/source/parameters.yaml
 - https://review.openstack.org/315557 -
api-ref/source/os-security-group-default-rules.inc,
api-ref/source/parameters.yaml
 - https://review.openstack.org/315572 - api-ref/source/parameters.yaml,
api-ref/source/servers-action-evacuate.inc,
api-ref/source/servers-admin-action.inc,
doc/api_samples/os-evacuate/v2.27/server-evacuate-req.json,
doc/api_samples/os-migrate-server/v2.27/live-migrate-server.json,
doc/api_samples/versions/v21-version-get-resp.json,
doc/api_samples/versions/versions-get-resp.json,
nova/api/openstack/api_version_request.py,
nova/api/openstack/compute/evacuate.py,
nova/api/openstack/compute/migrate_server.py,
nova/api/openstack/compute/schemas/evacuate.py,
nova/api/openstack/compute/schemas/migrate_server.py,
nova/api/openstack/rest_api_version_history.rst, nova/compute/api.py,
nova/tests/functional/api_sample_tests/api_samples/os-evacuate/v2.27/server-evacuate-req.json.tpl,
nova/tests/functional/api_sample_tests/api_samples/os-migrate-server/v2.27/live-migrate-server.json.tpl,
nova/tests/functional/api_sample_tests/test_evacuate.py,
nova/tests/functional/api_sample_tests/test_migrate_server.py,

Re: [openstack-dev] [Neutron] Getting rid of lazy init for engine facade

2016-05-12 Thread Anna Kamyshnikova
Roman, thanks a lot for the guidelines! I've updated the change and removed
the configure_db parameter.
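
For anyone following along, the usage pattern Roman describes below boils
down to something like this minimal sketch (the model and function names
are illustrative, not actual Neutron code):

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

from oslo_db.sqlalchemy import enginefacade

Base = declarative_base()

class Network(Base):
    # placeholder model for the sake of the example
    __tablename__ = 'networks'
    id = sa.Column(sa.String(36), primary_key=True)
    name = sa.Column(sa.String(255))

@enginefacade.reader
def get_networks(context):
    # the facade configures itself lazily from oslo.db config options
    # on the first attempt to get a session
    return context.session.query(Network).all()

@enginefacade.writer
def update_network(context, network_id, name):
    context.session.query(Network).filter_by(
        id=network_id).update({'name': name})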

On Wed, May 11, 2016 at 4:58 PM, Roman Podoliaka 
wrote:

> Hi Anna,
>
> Thank you for working on this in Neutron!
>
> EngineFacade is initialized lazily internally - you don't have to do
> anything for that in Neutron (you *had to* with the "old" EngineFacade -
> this is the boilerplate your patch removes).
>
> I believe, you should be able to call configure(...) unconditionally
> as soon as you have parsed the config files. Why do you want to
> introduce a new conditional?
>
> Moreover, if you only have connections to one database (unlike Nova,
> which also has Cells databases), you don't need to call configure() at
> all - EngineFacade will read the values of config options registered
> by oslo.db on the first attempt to get a session / connection.
>
> Thanks,
> Roman
>
> On Wed, May 11, 2016 at 4:41 PM, Anna Kamyshnikova
>  wrote:
> > Hi guys!
> >
> > I'm working on the adoption of the new engine facade from oslo.db for Neutron
> [1].
> > This work requires us to get rid of lazy init for the engine facade. [2] I
> > propose change [3], which adds a configure_db parameter that is False by
> > default, so if work with the db is required, configure_db=True should be
> > passed manually.
> >
> > NOTE: this will affect all external repos depending on Neutron!
> >
> > I'm considering making this argument mandatory to force every project
> > depending on this function to explicitly make a decision there.
> >
> > I want to encourage reviewers to take a look at this change, and I'm
> > looking forward to all suggestions.
> >
> > [1] - https://bugs.launchpad.net/neutron/+bug/1520719
> > [2] -
> >
> http://specs.openstack.org/openstack/oslo-specs/specs/kilo/make-enginefacade-a-facade.html
> > [3] - https://review.openstack.org/#/c/312393/
> >
> > --
> > Regards,
> > Ann Kamyshnikova
> > Mirantis, Inc
> >



-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc


Re: [openstack-dev] [tc] supporting Go

2016-05-12 Thread Robert Collins
On 13 May 2016 at 00:01, Thierry Carrez  wrote:
> Robert Collins wrote:
>>
>> [...]
>> So, given that that is the model - why is language part of it? Yes,
>> there are minimum overheads to having a given language in CI [...]
>
>
> By "minimum" do you mean that they are someone else's problem ?

No. I mean that there is an irreducible cost: there is some minimum,
and we need to account for that (as, in fact, the end of that paragraph
you snipped said).

> There are economics at play here. Adding a language simplifies the work of
> some and makes more work for others. Obviously "some" see mostly benefits and
> "others" see mostly drawbacks. You're just creating an externality[1].

I'm not sure that allowing arbitrary languages would constitute an
externality, properly structured - which is what I was trying to get
at with my mail.

> So rather than shifting workloads to someone else or pretending there is no
> problem, let's take the time to calmly measure the cost, see what resources
> we have lined up to address that cost, and make a decision from there.

I proposed neither of those things (shifting, or pretence). Rebutting
those positions is not rebutting my argument.

I agree with taking the time to assess things carefully, but in this
discussion no one had taken a general 'pro' stance, so I decided, as an
exercise, to write one up.

However, assessing 'what we have to address that cost' is missing the
actual point: we don't need to cover the cost /once/; we need to make
how it's covered, and who covers it, be structured such that folks
wanting to use $LANGUAGE provide the [human] resources needed to cover
the incurred costs. An answer which says 'we can just afford to pay
for Go, but that's it' is an answer that has failed to actually tackle
the underlying problem.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] Ansible inventory - Would you need another inventory system for your openstack cloud and how would you like it to be?

2016-05-12 Thread Monty Taylor
On 05/12/2016 12:25 PM, Sean M. Collins wrote:
> Monty Taylor wrote:
>> On 05/09/2016 09:27 AM, Jean-Philippe Evrard wrote:
>>> Hello everyone,
>>>
>>> I am using ansible for some time now, and I think the current default
>>> Ansible inventory system is lacking a few features, at least when
>>> working on an OpenStack cloud - whether it's for its deployment, its
>>> support or its usage.
>>> I'm thinking of developing a new inventory, based on (openstack-)ansible
>>> experience.
>>
>> Before we talk about making a new OpenStack Inventory for ansible, let's
>> work on fixing the existing one. We just replaced the nova.py dynamic
>> inventory plugin in the last year with the new openstack.py system.
> 
> Interesting - I'd like to know more. A quick find / grep didn't help me
> find anything, can you help?

Absolutely!


https://github.com/ansible/ansible/blob/devel/contrib/inventory/openstack.py

Is the OpenStack dynamic inventory itself. If you copy that file
somewhere, make it executable, and point to it as your inventory, it
will behave as an inventory for all of the clouds and regions of those
clouds you have defined in your clouds.yaml file.

There are a few additional clouds.yaml settings you might want to add:

ansible:
  use_hostnames: True
cache:
  expiration_time: 86400
  path: /var/cache/ansible-inventory

the use_hostnames: True is an option you always want, but it is not the
default behavior, because setting it as the default would be a behavior
change and we try VERY hard not to change behavior. It lists your hosts
in the inventory by name, rather than by UUID.

The cache lines have it cache your inventory so that it's not doing the
actual list_servers() call every time you run ansible.

It's worth noting that all of the information that nova provides - and
some additional information that shade digs out of things - is attached
to each server as an ansible dict variable called "openstack".

It will also create a lot of groups for you, based on the nova metadata
values "group" and "groups", as well as ones for cloud, region, az, image
and flavor.

If you run the script by hand like this:

openstack.py --list

It'll spit the inventory info to stdout so that you can look at it.

There is an exceptional doc from Catalyst:

http://docs.catalystcloud.io/tutorials/ansible-openstack-dynamic-inventory.html

Most of the real work for the inventory is actually done in the shade
library - so fleshing out things for it can be done in gerrit should you
find things that would make your life better.
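
If you want to poke at the same data programmatically, the inventory's
view of a cloud is essentially what shade returns. A minimal sketch,
where 'mycloud' is a placeholder for a cloud name from your clouds.yaml:

import shade

cloud = shade.openstack_cloud(cloud='mycloud')
for server in cloud.list_servers():
    # shade augments nova's data, e.g. with resolved IPv4 addresses
    print(server['name'], server.get('public_v4'))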



Re: [openstack-dev] [vitrage] [congress] Vitrage-Congress Collaboration

2016-05-12 Thread Tim Hinrichs
Hi Alexey,

That sounds like a good use case.  If I understand it correctly, the use
case can be accomplished by doing two things:

1. Writing a Vitrage datasource driver that, whenever it receives an alarm,
pulls Vitrage's API.  I'd recommend having the datasource driver poll
periodically, with a push simply requesting an immediate refresh of that
data.  That way, the data is available as soon as you create the datasource
driver.

That would mean your datasource driver inherits from both the
PushedDataSourceDriver and PollingDataSourceDriver classes.   Pulling data
has been in Congress since day 1, and all but one of our drivers pull data,
so there are plenty of examples in congress/datasources.   Push is new
functionality that Masahito added in Mitaka.

Here are the docs on writing a datasource driver.  I'd start with pulling
the data from Vitrage and adding the push functionality once that is
working (using the request_refresh() method from the PushedDataSourceDriver
base class to force the poller to immediately pull the data).

http://docs.openstack.org/developer/congress/cloudservices.html
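
As a skeleton of what that could look like (a sketch only -- apart from
the PollingDataSourceDriver/PushedDataSourceDriver base classes and
request_refresh() mentioned above, every name here is an assumption, not
the real Congress API):

from congress.datasources import datasource_driver

class VitrageDriver(datasource_driver.PollingDataSourceDriver,
                    datasource_driver.PushedDataSourceDriver):

    HOST_ALARM = 'host_alarm'  # table name assumed in the policies below

    def update_from_datasource(self):
        # poll path: fetch current alarms via a hypothetical Vitrage
        # client and translate them into rows of the host_alarm table
        alarms = self.vitrage_client.alarm.list()
        self.state[self.HOST_ALARM] = set(
            (alarm['resource_id'],) for alarm in alarms)

    def handle_push(self, event):
        # push path: an incoming alarm just forces an immediate poll
        self.request_refresh()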

2. Writing a policy that uses the information about the host alarm that the
datasource driver pulled.  Below are 2 examples, where I'm assuming that
the vitrage datasource driver creates a table called host_alarm within
Congress.

- reactive policy like "whenever the host alarm is ON, pause all the VMs on
that host".
   Something like:
   execute[nova:pause(vm)] :- vitrage:host_alarm(host), nova:servers(id=vm,
hostId=host)

- monitoring policy: "whenever the host alarm is ON, all the VMs on that
host belong to the violation table"
   violation(vm) :- vitrage:host_alarm(host), nova:servers(id=vm,
hostId=host)

Here are the docs on writing such policies.
Basic policy:
http://docs.openstack.org/developer/congress/policy.html

Reactive policy:
http://docs.openstack.org/developer/congress/enforcement.html#manual-reactive-enforcement

Tim



On Thu, May 12, 2016 at 1:45 AM Masahito MUROI 
wrote:

> Hi Alexey,
>
> Thanks for clarifying! I understood your use case.
>
> Anyway, as Tim mentioned before, implementing a Vitrage driver seems to be
> a good first step toward integrating the two.
>
> best regards,
> Masahito
>
> On 2016/05/10 20:00, Weyl, Alexey (Nokia - IL) wrote:
> > Hi Masahito,
> >
> > In addition, I wanted to add that the reason Congress needs to get the
> data from Vitrage by a push mechanism, and not via polling, is so there
> won't be a delay between when the event occurs and when Congress receives it.
> Using polling, it will take a number of seconds (the polling interval time,
> 30 seconds by default) until Congress receives the data.
> >
> > The reason we need it, of course, is to make the whole process work
> much faster, and be consistent with other projects such as OPNFV Doctor
> (which wants events to be handled in less than 1 second).
> >
> > Alexey
> >
> >> -Original Message-
> >> From: Weyl, Alexey (Nokia - IL) [mailto:alexey.w...@nokia.com]
> >> Sent: Tuesday, May 10, 2016 1:45 PM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [vitrage] [congress] Vitrage-Congress
> >> Collaboration
> >>
> >> Hi Masahito,
> >>
> >> Thanks for your question.
> >>
> >> There are two main reasons why we need to get alarms from Vitrage
> >> initially.
> >>
> >> First, there are alarms that Vitrage generates ("deduced alarms") based
> >> on its user-defined templates and topology. Also, there are alarms that
> >> come from external sources outside of OpenStack, which Aodh and other
> >> projects do not hold. This information could also be valuable for
> >> Congress regardless of the RCA functionality.
> >>
> >> Second, since Vitrage retrieves alarms from multiple sources, the RCA
> >> API takes as input the Vitrage-Id of the alarm. To know what that ID
> >> is, you will need to first get the Alarms from Vitrage.
> >>
> >> Does this make sense? Would there be a different flow you think could
> >> work?
> >>
> >> Best Regards,
> >> Alexey
> >>
> >>> -Original Message-
> >>> From: Masahito MUROI [mailto:muroi.masah...@lab.ntt.co.jp]
> >>> Sent: Tuesday, May 10, 2016 11:00 AM
> >>> To: openstack-dev@lists.openstack.org
> >>> Subject: Re: [openstack-dev] [vitrage] [congress] Vitrage-Congress
> >>> Collaboration
> >>>
> >>> Hi Alexey,
> >>>
> >>> This use case sounds interesting. To be clarified it, I have a
> >>> question.
> >>>
> >>> On 2016/05/10 0:17, Weyl, Alexey (Nokia - IL) wrote:
>  Hi Tim,
> 
>  I agree – creating a datasource from Vitrage to Congress is the
>  first step, and we should have some concrete use case in mind to
>  help guide this process.
> 
>  The most straightforward use case I would suggest is when there is a
>  problem on an instance that is caused by some problem on the
>  physical host. Then:
> 
>  * Vitrage will notify about an alarm on the instance, 

[openstack-dev] [heat] [grenade] need for a testing phase?

2016-05-12 Thread Sean Dague
I just discovered this -
https://github.com/openstack/heat/blob/fed92fdd6e5a14ea658621375e528f1c0cbde944/devstack/upgrade/resources.sh#L41
in looking at why - https://review.openstack.org/#/c/315501/ did not
pass on gate-grenade-dsvm-heat-nv

Overloading resource create with a base smoke test is definitely not the
way you want to use that interface (especially as we just changed things
so there is a phase before that that might muck you up).

It seems like the problem at hand is loading a different bit of
validation code at the end of the base / target phase.

Would a validate.sh (base|target) work for you all here?

The above patch to grenade will be merged soon to unblock ironic's
upgrade efforts. I'm happy to carve off another interface here if this
is the root cause of the failure, though I'm not hugely sure why that
would be. Please let me know.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [horizon][keystone] Getting Auth Token from Horizon when using Federation

2016-05-12 Thread Adam Young

On 05/12/2016 09:07 AM, Edmund Rhudy (BLOOMBERG/ 120 PARK) wrote:
+1 on desiring OAuth-style tokens in Keystone. The use cases that come 
up here are people wanting to be able to execute jobs that use the 
APIs (Jenkins, Terraform, Vagrant, etc.) without having to save their 
personal credentials in plaintext somewhere, and also wanting to be 
able to associate credentials with a project instead of a specific 
person, so that if a person leaves or rotates their password it 
doesn't blow up their team's carefully crafted automation.
We can sort of work around it with LDAP service accounts as mentioned 
previously, but the concern around those is the lack of speedy 
revocability in the event of a compromise, and the service accounts 
could possibly be used to get to non-OpenStack places until they get 
shut down. One thought I had to try to keep the auth domain 
constrained to only OpenStack was using the EC2 API because at least 
that means you're not saving LDAP passwords on disk and the access 
keys are useless beyond that particular Keystone installation, but you 
run into impedance mismatches between the Nova API and AWS EC2 API, 
and we'd like people to use the native OpenStack APIs. (Turns out the 
notion of using AWS's EC2 API to talk to a private cloud is strange to 
people not steeped in cloudy things.)
So service accounts and OAuth consumers are two different names for the 
same abstract construct. In both cases, the important thing is limiting 
the access each one has.



Horizon is for the interactive use case, though, and should not be using 
either, except as a front to define workflows, and in that case, the 
same work should be possible from the command line.


ECP should make that possible, assuming your IdP supports ECP (EIEIO!).




From: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [horizon][keystone] Getting Auth Token 
from Horizon when using Federation


Hi Dolph,

On Mon, 2016-04-18 at 17:50 -0500, Dolph Mathews wrote:
> On Mon, Apr 18, 2016 at 11:34 AM, Martin Millnert wrote:
> > Hi,
> >
> > we're deploying Liberty (soon Mitaka) with heavy reliance on the
> > SAML2 Federation system by Keystone where we're a Service Provider
> > (SP).
> >
> > The problem in this situation is getting a token for direct API
> > access.(*)
> >
> > There are conceptually two methods to use the CLI:
> > 1) Modify ones (each customer -- in our case O(100)) IdP to add
> > support for a feature called ECP(**), and then use keystoneauth with
> > SAML2 plugin,
> > 2) Go to (for example) "Access & Security / API Access / View
> > Credentials" in Horizon, and check out a token from there.
> >
> > With a default configuration, this token would only last a short
> > period of time, so this would be incredibly repetitive (and thus
> > tedious).

Assuming all that is setup, the user should be unaware of the re-init to 
the SAML IdP to get a new assertion for a new token. Why is this a problem?




Indeed.

> So, I assume you mean some sort of long-lived API tokens?

Right.

> API tokens, including keystone's UUID, PKI, PKIZ, and Fernet tokens
> are all bearer tokens, so we force a short lifetime by default,
> because there are always multiple parties capable of compromising the
> integrity of a token. OAuth would be a counter example, where OAuth
> access tokens can (theoretically) live forever.



Still think that is a security violation.



This does sound very interesting. As long as the end user gets
something useful to plug into the openstack auth libraries/APIs,
we're home free (modulo security considerations, etc).

> > 2) isn't implemented. 1) is a complete blocker for many customers.
> >
> > Are there any principal and fundamental reasons why 2 is not doable?
> > What I imagine needs to happen:
> > A) User is authenticated (see *) in Horizon,
> > B) User uses said authentication (token) to request another token
> > from Keystone, which is displayed under the "API Access" tab on
> > "Access & Security".
>
> The (token) here could be an OAuth access token.

Will look into this (also as per our discussion in Austin). The one
issue that has appeared in our continued discussions at home is the
contrast against "service user accounts", which seem relatively
prevalent/common among deployers today and basically use
username/password as the API key credentials, e.g. the authZ of the
issued token: if AdminNameless is Domain Admin in their domain, won't
their OAuth access token yield keystone tokens with the same authZ as
they otherwise have? My presumptive answer being 'yes' brought me to
the realization that, if one wants to avoid going the way of "service
user accounts" but still reduce authZ, one would like to be able to
get OAuth access tokens for a specific project, with a specific role
(e.g. "user", 

Re: [openstack-dev] [magnum] Jinja2 for Heat template

2016-05-12 Thread Cammann, Tom
I'm in broad agreement with Hongbin. Having tried a patch to use jinja2 in the 
templates, I can say it certainly adds complexity. I am in favor of using conditionals 
and consuming the latest version of heat. If we intend to support older 
versions of OpenStack this should be a clearly defined goal and needs to be 
tested. An aspiration to work with older versions isn’t a good policy.

I would like to understand a bit better the “chaos” option 3 would cause.

Tom

From: Hongbin Lu [mailto:hongbin...@huawei.com]
Sent: 12 May 2016 16:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Jinja2 for Heat template

We discussed the management of Heat templates several times. It seems the 
consensus is to leverage the *conditionals* feature from Heat (option #1). From 
the past discussion, it sounds like option #2 or #3 would significantly 
complicate our Heat templates, thus incurring a maintenance burden.

However, I agree with Yuanying that option #1 will make the Newton (or newer) 
version of Magnum incompatible with Mitaka (or older) versions of OpenStack. A 
solution I can think of is to have a Jinja2 version of the Heat templates in the 
contrib folder, so that operators can swap the Heat templates if they want to 
run a newer version of Magnum with an older version of OpenStack. Thoughts?

Best regards,
Hongbin

From: Yuanying OTSUKA [mailto:yuany...@oeilvert.org]
Sent: May-12-16 6:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Jinja2 for Heat template

Hi,
Thanks for your helpful comment.

I didn't know about the pattern you suggested.
We often want an "if" or a "for", etc.

For example,
* if a private network is supplied as a parameter, disable creating the network 
resource.
* if the https parameter is enabled, tcp port 6443 should be opened instead of 8080 
at "OS::Neutron::SecurityGroup".
* if the https parameter is enabled, the load-balancing protocol should be TCP instead 
of HTTP

and so on.
So, I want to use Jinja2 templating to manage this.
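
As an illustration, here is a minimal sketch of the jinja2 approach for
the first two cases above; the parameter names are illustrative, not
actual Magnum template inputs.

from jinja2 import Template

SNIPPET = Template('''
resources:
{% if not fixed_network %}
  private_network:
    type: OS::Neutron::Net
{% endif %}
  api_secgroup:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - protocol: tcp
          port_range_min: {{ 6443 if tls_enabled else 8080 }}
          port_range_max: {{ 6443 if tls_enabled else 8080 }}
''')

print(SNIPPET.render(fixed_network=None, tls_enabled=True))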

I’ll try to use the composition model above,
and also test the limited use of jinja2 templating.


Thanks
- OTSUKA, Yuanying



On Thu, 12 May 2016 at 17:46, Steven Hardy wrote:
On Thu, May 12, 2016 at 11:08:02AM +0300, Pavlo Shchelokovskyy wrote:
> Hi,
>
> not sure why 3 will bring chaos when implemented properly.

I agree - heat is designed with composition in mind, and e.g. in TripleO
we're making heavy use of it for optional configurations and it works
pretty well:

http://docs.openstack.org/developer/heat/template_guide/composition.html

https://www.youtube.com/watch?v=fw0JhywwA1E

http://hardysteven.blogspot.co.uk/2015/05/tripleo-heat-templates-part-1-roles-and.html

https://github.com/openstack/tripleo-heat-templates/tree/master/environments

> Can you abstract the "thing" (sorry, not quite familiar with Magnum) that
> needs FP + FP itself into a custom resource/nested stack? Then you could
> use a single master template plus two environments (one with FP, one
> without), and choose which one to use right where you have this logic
> split in your code.

Yes, this is exactly the model we make heavy use of in TripleO, it works
pretty well.

Note there's now an OS::Heat::None resource in heat, which makes it easy to
conditionally disable things (without the need for a noop.yaml template
that contains matching parameters):

http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::None

So you'd have two environment files like:

cat enable_floating.yaml:
resource_registry:
  OS::Magnum::FloatingIP: templates/the_floating_config.yaml

cat disable_floating.yaml:
resource_registry:
  OS::Magnum::FloatingIP: OS::Heat::None

Again, this pattern is well proven and works pretty well.

Conditionals may provide an alternative way to do this, but at the expense
of some additional complexity inside the templates.

> Option 2 is not so bad either IMO (AFAIK Trove was doing that at some time,
> not sure of current status), but the above would be nicer.

Yes, in the past[1] I've commented that the composition model above may be
preferable to jinja templating, but recently I've realized there are pros
and cons to each approach.

The heat composition model works pretty well when you want to combine
multiple pieces (nested stacks) which contain some mixture of different
resources, but it doesn't work so well when you want to iterate over a
common pattern and build a template (e.g. based on a loop).

You can use ResourceGroups in some cases, but that adds to the stack depth
(number of nested stacks) and may not be workable for upgrades, so TripleO
is now looking at some limited use of jinja2 templating also. I agree it's
not so bad provided the interfaces presented to the user are carefully
constrained.

Steve

[1] https://review.openstack.org/#/c/211771/


Re: [openstack-dev] [cross-project][infra][keystone] Moving towards a Identity v3-only on Devstack - Next Steps

2016-05-12 Thread Raildo Mascena
On Thu, May 12, 2016 at 3:19 PM Sean Dague  wrote:

> On 05/12/2016 01:47 PM, Morgan Fainberg wrote:
> > This  also comes back to the conversation at the summit. We need to
> > propose the timeline to turn over for V3 (regardless of
> > voting/non-voting today) so that it is possible to set the timeline that
> > is expected for everything to get fixed (and where we are
> > expecting/planning to stop reverting while focusing on fixing the
> > v3-only changes).
> >
> > I am going to ask the Keystone team to set forth the timeline and commit
> > to getting the pieces in order so that we can make v3-only voting rather
> > than playing the propose/revert game we're currently doing. A proposed
> > timeline and gameplan will only help at this point.
>
> A timeline would be good (proposed below), but there are also other bits
> of the approach we should consider.
>
> I would expect, for instance,
> gate-tempest-dsvm-neutron-identity-v3-only-full to be on keystone, and
> it does not appear to be. Is there a reason why?
>

We made that change here: https://review.openstack.org/#/c/311169/

>
> With that on keystone, devstack-gate, devstack, tempest the integrated
> space should be pretty well covered. There really is no need to also go
> stick this on glance, nova, cinder, neutron, swift I don't think,
> because they only really use keystone through pretty defined interfaces.
>
> Then some strategic use of nv jobs on things we know would have some
> additional interactions here (because we know they are currently broken
> or they do interesting things) like ironic, heat, trove, would probably
> be pretty useful.
>
> That starts building up the list of known breaks the keystone folks are
> tracking, which should get a drum beat every week in email about
> outstanding issues, and issues fixed.
>

++ Sounds like a good idea; I'll work to make this happen.

>
> The goal of gate-tempest-dsvm-neutron-identity-v3-only-full should not
> be for that to be voting, ever. It should be to use that as a good
> indicator of when we can change the default in devstack (and thus in the
> majority of upstream jobs) to not ever enable v2.
>

We intend to use this job as an indicator to find bugs related to this, but
the idea of making gate-tempest-dsvm-neutron-identity-v3-only-full voting is
to make sure that no one is sending anything v3-incompatible.

> Because of how v3 support exists in projects (largely hidden behind
> keystoneauth), it is really unlikely to randomly regress once fixed. There
> just aren't that many knobs a project has that would make that happen.
> So I think we can make forward progress without a voting backstop until
> we get to a point where we can just throw the big red switch (with
> warning) on a Monday (probably early in the Ocata cycle) and say there
> you go. It's now the project's job to handle it. And they'll all get fair
> warning for the month prior to the big red switch.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>


[openstack-dev] [release] Release countdown for week R-20, May 16-20

2016-05-12 Thread Doug Hellmann
Focus
-

Teams should have published summaries from summit sessions to the
openstack-dev mailing list and should be working on spec writing and
review for priority features for this cycle.

General Notes
-

I've proposed a patch to the release tools to change the tag in
release announcement emails from "release" to "new". Please comment
on https://review.openstack.org/#/c/312762 if you have any thoughts
about that.

The release cycle model tags now say explicitly that the release
team manages releases, which may require changes to process or
tags depending on what teams want to do. I will be reviewing
the ACLs on all affected repositories over the coming weeks. See
https://review.openstack.org/#/c/308045/ for details.

Release Actions
---

Release liaisons should add their name and contact information to
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Release_management

New liaisons should familiarize themselves with the release instructions
in the README file in openstack/releases. Follow up here on the mailing
list if you have questions about the expectations for liaisons for the
first milestone.

Project teams that want to change their release model should do so before
the first milestone, coming up in week R-18.

Important Dates
---

Newton 1 milestone: R-18 June 2

Newton release schedule: http://releases.openstack.org/newton/schedule.html



[openstack-dev] [neutron][ovo] NeutronDbObject concurrency issues

2016-05-12 Thread Ilya Chukhnakov
Hi everyone.

I’ve recently found that straightforward use of NeutronDbObject is prone to
concurrency-related problems.

I’ve submitted a patch set [3] with some tests to show that without special
treatment using NeutronDbObject could lead to unexpected results.

Further patch sets will provide acquire_object/acquire_objects contextmanager
methods to the NeutronDbObject class. These methods are to be used in place of
get_object/get_objects whenever the user intends to make changes to the object.
These methods would start an autonested_transaction.

There are (at least) two potential options for the implementation:

1. Based on the DB locks (e.g. SELECT FOR UPDATE/SqlAlchemy with_for_update).

   pros:
 - the object is guaranteed to not be changed while within the context

   cons:
 - prone to deadlocks ([1] and potentially when locking multiple objects)

2. Lock-free CAS based on object version counter. Can use SqlAlchemy version
   counter [2] or add our own. If conflicting changes are detected upon exiting
   the context (i.e. version counter held differs from the one in the DB), will
   raise OSLO RetryRequest exception.

   pros:
 - does not require locking

   cons:
 - require an additional field in the models

While opt. 2 only prevents conflicting changes and does not guarantee that
the object does not change while within the context, opt. 1 may seem
preferable. But even with opt. 1 the user should not expect that the changes
made to the object while within the context will get to the database, as the
autonested_transaction could fail on flush/commit.
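
To make opt. 2 concrete, here is a minimal sketch using SqlAlchemy's
built-in version counter [2]; the model is illustrative, not actual
Neutron code.

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm.exc import StaleDataError

Base = declarative_base()

class Port(Base):
    # illustrative model only
    __tablename__ = 'ports'
    id = sa.Column(sa.String(36), primary_key=True)
    status = sa.Column(sa.String(16))
    version = sa.Column(sa.Integer, nullable=False)
    # every UPDATE gets "WHERE version = <value read>" appended and
    # bumps the counter, so a concurrent writer makes the flush fail
    __mapper_args__ = {'version_id_col': version}

def set_status(session, port_id, status):
    port = session.query(Port).get(port_id)
    port.status = status
    try:
        session.commit()
    except StaleDataError:
        session.rollback()
        raise  # could be translated into oslo's RetryRequest here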

So I’d like to hear others’ opinion on the problem and which of the two
implementation options would be preferred? Or maybe someone has a better idea.

[1] 
https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#MySQLdb_.2B_eventlet_.3D_sad
[2] http://docs.sqlalchemy.org/en/rel_0_9/orm/versioning.html

[3] https://review.openstack.org/#/c/315705/

--
Thanks,
Ilya


[openstack-dev] [tripleo] sharing bits between nodes during deployment

2016-05-12 Thread Emilien Macchi
Hi,

During the recent weeks, we've noticed that some features would have a
common challenge to solve:
How to share information or files between nodes during a multi-node
deployment.

A few use-cases:

* Deploying Keystone using Fernet tokens

Adam Young started this topic a few weeks ago, we are investigating
how to integrate Fernet in TripleO.
The main challenge is that we want to generate keys periodically for
security purposes.
In a multi-node environment behind HAProxy, you need to make sure
all Fernet keys are the same; otherwise you run the risk of a user
hitting a Keystone server that won't be able to validate their token.
We need a way to:
1) generate keys periodically. It could be in puppet-keystone; we
already have a crontab example:
https://github.com/openstack/puppet-keystone/tree/master/manifests/cron
2) distribute the keys from one node to the other nodes <-- that is the challenge.
note: I confirmed with ayoung, and there is no need to restart
Keystone when we rotate a key.

* Scaling down/up Swift cluster

It's currently impossible to scale a Swift cluster up or down in TripleO
because the ring is built during deployment and never updated
afterwards. This makes Swift not really usable in production without
manual intervention.
puppet-swift has a set of classes that performs this action, but we don't
use them because they require PuppetDB, so we need to find a way to
redistribute the ring when we add or remove Swift nodes in a TripleO
cloud.
Maybe we can investigate some Mistral actions, or Heat, that would run
the swift commands to redistribute the ring.

* Dynamic discovery

An example of use-case is: https://review.openstack.org/#/c/304125/
We want to manage Keystone resources for services (i.e. nova endpoints,
etc.) from the service roles (i.e. nova-api), so we stop creating all
endpoints from the keystone profile role.
The current issue with composable services is that until now (tell me
if I'm wrong), the keystone role is not aware of whether or not we run
gnocchi-api in our cloud, so we don't know whether we need to create its
endpoints etc.
On the review, please see my (long) comment on Patch Set 12, where we
expose our current challenges.


I hope that with this thread we can bootstrap some discussion around
these challenges, because we'll keep running into them as the complexity
of OpenStack deployments grows.
Feel free to comment / correct me / give any feedback on this initial
e-mail, thanks for reading so far.
-- 
Emilien Macchi



Re: [openstack-dev] [cross-project][infra][keystone] Moving towards a Identity v3-only on Devstack - Next Steps

2016-05-12 Thread Sean Dague
On 05/12/2016 01:47 PM, Morgan Fainberg wrote:
> This  also comes back to the conversation at the summit. We need to
> propose the timeline to turn over for V3 (regardless of
> voting/non-voting today) so that it is possible to set the timeline that
> is expected for everything to get fixed (and where we are
> expecting/planning to stop reverting while focusing on fixing the
> v3-only changes).
> 
> I am going to ask the Keystone team to set forth the timeline and commit
> to getting the pieces in order so that we can make v3-only voting rather
> than playing the propose/revert game we're currently doing. A proposed
> timeline and gameplan will only help at this point.

A timeline would be good (proposed below), but there are also other bits
of the approach we should consider.

I would expect, for instance,
gate-tempest-dsvm-neutron-identity-v3-only-full to be on keystone, and
it does not appear to be. Is there a reason why?

With that on keystone, devstack-gate, devstack, tempest the integrated
space should be pretty well covered. There really is no need to also go
stick this on glance, nova, cinder, neutron, swift I don't think,
because they only really use keystone through pretty defined interfaces.

Then some strategic use of nv jobs on things we know would have some
additional interactions here (because we know they are currently broken
or they do interesting things) like ironic, heat, trove, would probably
be pretty useful.

That starts building up the list of known breaks the keystone folks are
tracking, which should get a drum beat every week in email about
outstanding issues, and issues fixed.

The goal of gate-tempest-dsvm-neutron-identity-v3-only-full should not
be for that to be voting, ever. It should be to use that as a good
indicator of when we can change the default in devstack (and thus in the
majority of upstream jobs) to not ever enable v2.

Because of how v3 support exists in projects (largely hidden behind
keystoneauth), it is really unlikely to randomly regress once fixed. There
just aren't that many knobs a project has that would make that happen.
So I think we can make forward progress without a voting backstop until
we get to a point where we can just throw the big red switch (with
warning) on a Monday (probably early in the Ocata cycle) and say there
you go. It's now the project's job to handle it. And they'll all get fair
warning for the month prior to the big red switch.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] Ansible inventory - Would you need another inventory system for your openstack cloud and how would you like it to be?

2016-05-12 Thread Jeremy Stanley
On 2016-05-12 17:25:31 + (+), Sean M. Collins wrote:
> Monty Taylor wrote:
[...]
> > Before we talk about making a new OpenStack Inventory for ansible, let's
> > work on fixing the existing one. We just replaced the nova.py dynamic
> > inventory plugin in the last year with the new openstack.py system.
> 
> Interesting - I'd like to know more. A quick find / grep didn't help me
> find anything, can you help?

I assume the reference was to:

https://github.com/ansible/ansible/blob/devel/contrib/inventory/openstack.py

-- 
Jeremy Stanley



Re: [openstack-dev] [magnum] How to document 'labels'

2016-05-12 Thread Cammann, Tom
The canonical place for the help/documentation must be the online 
documentation. We must provide information in the docs first; additional 
content can then be added to the CLI help. The labels field contains arbitrary 
metadata which can be consumed and used by the COE/bay; it has no set structure 
or format. Having an exhaustive list of possible metadata values should not be 
an aim of the command line help. Prior art in this area includes glance image 
properties[1], for which the CLI provides no reference to the values that can be 
passed, and nova scheduler hints[2], which likewise provide no information on the 
values that can be passed.

There is a middle ground we should be treading. Users should not be using the 
CLI as a reference for working out what label keys and values should be 
passed in to elicit a certain response from the bay; that should be found in the 
docs. I am in favor of adding a short list of commonly used labels to the CLI 
help to jog the memory of the user.

I'm all for an additional man page which lists the labels documentation; however, 
this change is specifically about the CLI "--help".

[1] 
https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v1/shell.py#L218
[2] 
https://github.com/openstack/python-novaclient/blob/master/novaclient/v2/shell.py#L526

Tom

From: Tim Bell [mailto:tim.b...@cern.ch]
Sent: 12 May 2016 17:58
To: OpenStack Development Mailing List (not for usage questions)
Cc: Qun XK Wang
Subject: Re: [openstack-dev] [magnum] How to document 'labels'

I’d be in favor of 1.

At the end of the man page or full help text, a URL could be useful for more 
information, but since most people using the CLI will have to do a context 
switch to access the docs, it is not a simple click but a 
copy/paste/find-the-browser-window exercise, which is not so friendly.

Tim

From: Jamie Hannaford
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday 12 May 2016 at 16:04
To: "OpenStack Development Mailing List (not for usage questions)"
Cc: Qun XK Wang
Subject: Re: [openstack-dev] [magnum] How to document 'labels'


+1 for 1 and 3.



I'm not sure maintainability should discourage us from exposing information to 
the user through the client - we'll face the same maintenance burden as we 
currently do, and IMO it's our job as a team to ensure our docs are up-to-date. 
Any kind of input which touches the API should also live in the API docs, 
because that's in line with every other OpenStack service.



I don't think I've seen documentation exposed via the API before (#2). I think 
it's a lot of work too, and I don't see what benefit it provides.



Jamie




From: Hongbin Lu
Sent: 11 May 2016 21:52
To: OpenStack Development Mailing List (not for usage questions)
Cc: Qun XK Wang
Subject: [openstack-dev] [magnum] How to document 'labels'

Hi all,

This is a continued discussion from the last team meeting. For recap, 'labels' 
is a property in baymodel and is used by users to input additional key-value 
pairs to configure the bay. In the last team meeting, we discussed the 
best way to document 'labels'. In general, I heard three options:

1.   Place the documentation in Magnum CLI as help text (as Wangqun 
proposed [1][2]).

2.   Place the documentation in the Magnum server and expose it via the REST 
API. Then, have the CLI load the help text of individual properties from the 
Magnum server.

3.   Place the documentation in a documentation server (like 
developer.openstack.org/…), and add the doc link to the CLI help text.
For option #1, I think an advantage is that it is close to end-users, thus 
providing a better user experience. In contrast, Tom Cammann pointed out the 
disadvantage that the CLI help text can more easily become out of date. For 
option #2, it should work but incurs a lot of extra work. For option #3, the 
disadvantage is the user experience (since users need to click the link to see 
the documents) but it is easier for us to maintain. I am wondering if it is 
possible to have a combination of #1 and #3. Thoughts?

[1] https://review.openstack.org/#/c/307631/
[2] https://review.openstack.org/#/c/307642/

Best regards,
Hongbin



Re: [openstack-dev] [glance] Newton priorities, processes, dates, spec owners and reviewers

2016-05-12 Thread Nikhil Komawar


On 5/12/16 1:51 PM, Doug Hellmann wrote:
> Excerpts from Nikhil Komawar's message of 2016-05-12 01:44:06 -0400:
>> Hello all,
>>
>> Here are a few important announcements for the members involved in the
>> Glance community.
>>
>>
>> Priorities:
>>
>> ===
>>
>> * The Glance priorities for Newton were discussed at the contributors'
>> meetup at summit.
>>
>> * There are a few items that were carried forward from Mitaka that are
>> still our priorities and there are a couple of items from the summit
>> that we have made a priority for reviews.
>>
>> Code review priority:
>>
>> * Import refactor
> Is "Import refactor" what you're calling the work on the new API to get
> images into glance to solve the DefCore compatibility issue?
>
> Doug


Yes, we call it that as per the (original) spec review title.


>
>> * Nova v1, v2 support
>>
>> * Image sharing changes
>>
>> * Documentation changes [1], [2]
>>
>>
>> The required attention from Glance team on Nova v1, v2 support is
>> minimal; the people who are actively involved should review the code and
>> the spec.
>>
>>
>> Everyone is encouraged to review the Import refactor work however, if
>> you do not know where to start you can join the informal syncs on
>> #openstack-glance Thursdays at 1330 UTC. If you do not see people
>> chatting you are more than encouraged to highlight the following irc
>> nicks: rosmaita, nikhil (to the very least)
>>
>>
>> Everyone is encouraged to review the Image sharing changes that are
>> currently being discussed. Although the constructs are not going to
>> hamper the standard image workflows, the experiences of sharing may be
>> different after these changes. There will be subsequent changes to the
>> python-glanceclient for accommodating server changes.
>>
>>
>> Documentation changes are something that we must accommodate in this
>> cycle; thanks to the docs team the code draft was given to us.
>> Documentation liaison is working hard to get it in the right shape and a
>> couple more reviewers are to be assigned to review this change. We need
>> volunteers for the review work.
>>
>>
>> Process to be adopted in Newton:
>>
>> ==
>>
>>
>> Full specs:
>>
>> * For all newly introduced features, API Impacting changes and changes
>> that could either have an impact security or larger impact on operators
>> will need a full spec against the openstack/glance-specs repo.
>>
>> * For each spec, you need to create a corresponding blueprint in
>> launchpad [3] and indicate your intention to target that spec in the
>> newton milestone. You will want to be judicious on selecting the
>> milestone; if we see too many proposals for a particular milestone
>> glance-core team will have to selectively reject some of those or move
>> to a different milestone. Please set the url of the spec on your blueprint.
>>
>> * Please use the template for the full spec [4] and try to complete it
>> as much as possible. A spec that is missing some critical info is likely
>> to not get feedback.
>>
>> * Only blueprints by themselves will not be reviewed. You need a spec
>> associated with a blueprint to get the proposal reviewed.
>>
>> * The reviewers section [5] is very important for us to determine if the
>> team will have enough time to review your spec and code. This
>> information plays an important role in planning and prioritizing your spec.
>> Reach out to these core-reviewer nicks [6] on the #openstack-glance channel
>> to see who is interested in assigning themselves to your spec.
>>
>> * Please make sure that each spec has a well-defined problem
>> statement. The problem statement isn't a one-liner such as: it
>> would be nice to have this change, admins should do operations that users
>> can't, Glance should do so and so, etc. The problem statement should
>> elaborate your use case and explain what in Glance or OpenStack can be
>> improved, what exists currently (if anything), why it would be beneficial to
>> make this change, and how the view of the cloud would change after this change.
>>
>> * All full specs require +W from PTL/liaison
>>
>>
>> Lite specs:
>>
>> * All proposals that are expected to change the behavior of the system
>> significantly are required to have a lite-spec.
>>
>> * For a lite-spec you do not need a blueprint filed and you don't need
>> to target it to particular milestones. Glance would accept most
>> lite-specs until newton-3 unless a cross-project or another conflicting
>> change is a blocker.
>>
>> * Please make sure that each lite-spec has a well-defined problem
>> statement. The problem statement is NOT a one-liner such as: it
>> would be nice to have this change, admins should do operations
>> that users can't, Glance should do so and so, etc. The problem
>> statement should elaborate your use case and explain what in Glance or
>> OpenStack can be improved, what exists currently (if anything), why it
>> would be beneficial to make this change, how the view of the cloud would change
>> after 

Re: [openstack-dev] [glance] Newton priorities, processes, dates, spec owners and reviewers

2016-05-12 Thread Doug Hellmann
Excerpts from Nikhil Komawar's message of 2016-05-12 01:44:06 -0400:
> Hello all,
> 
> Here are a few important announcements for the members involved in the
> Glance community.
> 
> 
> Priorities:
> 
> ===
> 
> * The Glance priorities for Newton were discussed at the contributors'
> meetup at summit.
> 
> * There are a few items that were carried forward from Mitaka that are
> still our priorities and there are a couple of items from the summit
> that we have made a priority for reviews.
> 
> Code review priority:
> 
> * Import refactor

Is "Import refactor" what you're calling the work on the new API to get
images into glance to solve the DefCore compatibility issue?

Doug

> 
> * Nova v1, v2 support
> 
> * Image sharing changes
> 
> * Documentation changes [1], [2]
> 
> 
> The required attention from the Glance team on Nova v1, v2 support is
> minimal; the people who are actively involved should review the code and
> the spec.
> 
> 
> Everyone is encouraged to review the Import refactor work; however, if
> you do not know where to start, you can join the informal syncs on
> #openstack-glance on Thursdays at 1330 UTC. If you do not see people
> chatting, you are more than encouraged to ping the following IRC
> nicks (at the very least): rosmaita, nikhil
> 
> 
> Everyone is encouraged to review the Image sharing changes that are
> currently being discussed. Although the constructs will not
> hamper the standard image workflows, the sharing experience may be
> different after these changes. There will be subsequent changes to
> python-glanceclient to accommodate the server changes.
> 
> 
> Documentation changes are something that we must accommodate in this
> cycle; thanks to the docs team, a code draft was given to us.
> The documentation liaison is working hard to get it into the right shape,
> and a couple more reviewers are to be assigned to review this change. We
> need volunteers for the review work.
> 
> 
> Process to be adopted in Newton:
> 
> ==
> 
> 
> Full specs:
> 
> * All newly introduced features, API-impacting changes, and changes
> that could have either a security impact or a larger impact on operators
> will need a full spec against the openstack/glance-specs repo.
> 
> * For each spec, you need to create a corresponding blueprint in
> launchpad [3] and indicate your intention to target that spec to a
> newton milestone. You will want to be judicious in selecting the
> milestone; if we see too many proposals for a particular milestone, the
> glance-core team will have to selectively reject some of them or move
> them to a different milestone. Please set the URL of the spec on your blueprint.
> 
> * Please use the template for the full spec [4] and try to complete it
> as much as possible. A spec that is missing some critical info is likely
> to not get feedback.
> 
> * Blueprints by themselves will not be reviewed. You need a spec
> associated with a blueprint to get the proposal reviewed.
> 
> * The reviewers section [5] is very important for us to determine if the
> team will have enough time to review your spec and code. This
> information plays an important role in planning and prioritizing your spec.
> Reach out to these core-reviewer nicks [6] on the #openstack-glance channel
> to see who is interested in assigning themselves to your spec.
> 
> * Please make sure that each spec has a well-defined problem
> statement. The problem statement isn't a one-liner such as: it
> would be nice to have this change, admins should do operations that users
> can't, Glance should do so and so, etc. The problem statement should
> elaborate your use case and explain what in Glance or OpenStack can be
> improved, what exists currently (if anything), why it would be beneficial to
> make this change, and how the view of the cloud would change after this change.
> 
> * All full specs require +W from PTL/liaison
> 
> 
> Lite specs:
> 
> * All proposals that are expected to change the behavior of the system
> significantly are required to have a lite-spec.
> 
> * For a lite-spec you do not need a blueprint filed and you don't need
> to target it to particular milestones. Glance would accept most
> lite-specs until newton-3 unless a cross-project or another conflicting
> change is a blocker.
> 
> * Please make sure that each lite-spec has a well-defined problem
> statement. The problem statement is NOT a one-liner such as: it
> would be nice to have this change, admins should do operations
> that users can't, Glance should do so and so, etc. The problem
> statement should elaborate your use case and explain what in Glance or
> OpenStack can be improved, what exists currently (if anything), why it
> would be beneficial to make this change, and how the view of the cloud would
> change after this change, etc.
> 
> * All lite specs should have at least two +2 (agreement from at least
> two core reviewers). There is no need to wait on +W from the PTL but it
> is highly encouraged to 

Re: [openstack-dev] [oslo] Austin summit session recap(s)

2016-05-12 Thread Doug Hellmann
Excerpts from Alexis Lee's message of 2016-05-12 10:22:29 +0100:
> Doug Hellmann said on Wed, May 11, 2016 at 08:45:16AM -0400:
> > Yes, handler, sorry.
> > 
> > I thought they were only unconfigured if the flag was set to do that,
> > and that your patch had the disable_existing_loggers flag set to not do
> > it? Maybe I misunderstood the full meaning of the flag, though, and it
> > only affects loggers and not handlers?
> 
> It only affects loggers. Here's the relevant part of logging
> (_install_loggers in logging/config.py):
> 
> def _install_loggers(cp, handlers, disable_existing_loggers):
>     ...
>     # Disable any old loggers. There's no point deleting
>     # them as other threads may continue to hold references
>     # and by disabling them, you stop them doing any logging.
>     # However, don't disable children of named loggers, as that's
>     # probably not what was intended by the user.
>     for log in existing:
>         logger = root.manager.loggerDict[log]
>         if log in child_loggers:
>             logger.level = logging.NOTSET
>             logger.handlers = []
>             logger.propagate = 1
>         else:
>             logger.disabled = disable_existing_loggers
> 
> The issue I mentioned is that non-child loggers aren't reset if
> disable_existing_loggers is False. It's easy enough to work around that
> though, new patchset is up.

OK, good. I'll stop worrying about this case and go look at your code.  :-)

Doug
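
For illustration, a minimal sketch (not part of the thread) of the behavior
discussed above, using dictConfig, which shares this logic with fileConfig:
with disable_existing_loggers=False, a pre-existing logger is left enabled,
but it is not reset either, so its old handlers stay attached.

import logging
import logging.config

# A logger created before (re)configuration, with a handler already attached.
pre = logging.getLogger("preexisting")
pre.addHandler(logging.StreamHandler())

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,  # don't disable old loggers...
    "loggers": {"configured": {"level": "INFO"}},
})

# ...but note they are not reset either: the old handler is still attached.
print(pre.disabled)   # False
print(pre.handlers)   # [<StreamHandler ...>]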

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project][infra][keystone] Moving towards a Identity v3-only on Devstack - Next Steps

2016-05-12 Thread Morgan Fainberg
On Thu, May 12, 2016 at 10:42 AM, Sean Dague  wrote:

> We just had to revert another v3 "fix" because it wasn't verified to
> work correctly in the gate - https://review.openstack.org/#/c/315631/
>
> While I realize project-config patches are harder to test, you can do so
> with a bogus devstack-gate change that has the same impact in some cases
> (like the case above).
>
> I think the important bit on moving forward is that every patch here
> which might be disruptive has some manual verification about it working
> posted in review by v3 team members before we approve them.
>
> I also think we need to largely stay non-voting on the v3-only job until
> we're quite confident that the vast majority of things are flipped over
> (for instance there remains an issue in nova <=> ironic communication
> with v3 last time I looked). That allows us to fix things faster because
> we don't wedge some slice of the projects in a gate failure.
>
> -Sean
>
> On 05/12/2016 11:08 AM, Raildo Mascena wrote:
> > Hi folks,
> >
> > Although the Identity v2 API is deprecated as of Mitaka [1], some
> > services haven't implemented proper support for v3 yet. For instance,
> > we implemented a patch that made DevStack v3 by default which, when
> > merged, broke a lot of project gates in a few hours [2]. This
> > happened due to specific services' incompatibility issues with the Keystone
> > v3 API, such as hardcoded v2 usage, usage of the removed keystoneclient CLI,
> > requesting v2 service tokens and the lack of keystoneauth session usage.
> >
> > To discuss those points, we did a cross-project work
> > session at the Newton Summit [3]. One point we are working on at this
> > moment is creating gates to ensure the main OpenStack services
> > can live without the Keystone v2 API. Those gates set up devstack with
> > only Identity v3 enabled and run the Tempest suite on this environment.
> >
> > We already did that for a few services, like Nova, Cinder, Glance,
> > Neutron, Swift. We are doing the same job for other services such
> > as Ironic, Magnum, Ceilometer, Heat and Barbican [4].
> >
> > In addition, we are creating jobs to run functional tests for the
> > services on this identity v3-only environment[5]. Also, we have a couple
> > of other fronts that we are doing like removing some hardcoded v2 usage
> > [6], implementing keystoneauth sessions support in clients and APIs [7].
> >
> > Our plan is to keep tackling as many items from the cross-project
> > session etherpad as we can, so we can achieve more confidence in moving
> > to a DevStack working v3-only, making sure everyone is prepared to work
> > with Keystone v3 API.
> >
> > Feedback and reviews are very much appreciated.
> >
> > [1] https://review.openstack.org/#/c/251530/
> > [2] https://etherpad.openstack.org/p/v3-only-devstack
> > [3] https://etherpad.openstack.org/p/newton-keystone-v3-devstack
> > [4]
> https://review.openstack.org/#/q/project:openstack-infra/project-config+branch:master+topic:v3-only-integrated
> > [5]
> https://review.openstack.org/#/q/topic:v3-only-functionals-tests-gates
> > [6] https://review.openstack.org/#/q/topic:remove-hardcoded-keystone-v2
> > [7] https://review.openstack.org/#/q/topic:use-ksa
> >
> > Cheers,
> >
> > Raildo
> >
> >
> >
>

This also comes back to the conversation at the summit. We need to propose
the timeline for turning over to v3 (regardless of voting/non-voting today) so
that we can set the timeline that is expected for everything to
get fixed (and where we expect/plan to stop reverting while
focusing on fixing the v3-only changes).

I am going to ask the Keystone team to set forth the timeline and commit to
getting the pieces in order so that we can make v3-only voting rather than
playing the propose/revert game we're currently doing. A proposed timeline
and gameplan will only help at this point.

--Morgan
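
As an aside, a hedged sketch (not from the thread) of the keystoneauth
session pattern referenced in [7] above: clients authenticate against the
Identity v3 endpoint through a shared Session instead of hardcoding v2 calls.
The endpoint and credentials here are example values only.

from keystoneauth1 import session
from keystoneauth1.identity import v3

auth = v3.Password(
    auth_url="http://controller:5000/v3",  # the v3 endpoint, not /v2.0
    username="demo",
    password="secret",
    project_name="demo",
    user_domain_id="default",
    project_domain_id="default",
)
sess = session.Session(auth=auth)

# Any keystoneauth-aware client can then reuse the session, e.g.:
# from novaclient import client as nova_client
# nova = nova_client.Client("2.1", session=sess)
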
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project][infra][keystone] Moving towards a Identity v3-only on Devstack - Next Steps

2016-05-12 Thread Sean Dague
We just had to revert another v3 "fix" because it wasn't verified to
work correctly in the gate - https://review.openstack.org/#/c/315631/

While I realize project-config patches are harder to test, you can do so
with a bogus devstack-gate change that has the same impact in some cases
(like the case above).

I think the important bit on moving forward is that every patch here
which might be disruptive has some manual verification about it working
posted in review by v3 team members before we approve them.

I also think we need to largely stay non-voting on the v3-only job until
we're quite confident that the vast majority of things are flipped over
(for instance there remains an issue in nova <=> ironic communication
with v3 last time I looked). That allows us to fix things faster because
we don't wedge some slice of the projects in a gate failure.

-Sean

On 05/12/2016 11:08 AM, Raildo Mascena wrote:
> Hi folks,
> 
> Although the Identity v2 API is deprecated as of Mitaka [1], some
> services haven't implemented proper support for v3 yet. For instance,
> we implemented a patch that made DevStack v3 by default which, when
> merged, broke a lot of project gates in a few hours [2]. This
> happened due to specific services' incompatibility issues with the Keystone
> v3 API, such as hardcoded v2 usage, usage of the removed keystoneclient CLI,
> requesting v2 service tokens and the lack of keystoneauth session usage.
> 
> To discuss those points, we did a cross-project work
> session at the Newton Summit [3]. One point we are working on at this
> moment is creating gates to ensure the main OpenStack services
> can live without the Keystone v2 API. Those gates set up devstack with
> only Identity v3 enabled and run the Tempest suite on this environment.
> 
> We already did that for a few services, like Nova, Cinder, Glance,
> Neutron, Swift. We are doing the same job for other services such
> as Ironic, Magnum, Ceilometer, Heat and Barbican [4].
> 
> In addition, we are creating jobs to run functional tests for the
> services on this identity v3-only environment[5]. Also, we have a couple
> of other fronts that we are doing like removing some hardcoded v2 usage
> [6], implementing keystoneauth sessions support in clients and APIs [7].
> 
> Our plan is to keep tackling as many items from the cross-project
> session etherpad as we can, so we can achieve more confidence in moving
> to a DevStack working v3-only, making sure everyone is prepared to work
> with Keystone v3 API.
> 
> Feedback and reviews are very much appreciated.
> 
> [1] https://review.openstack.org/#/c/251530/
> [2] https://etherpad.openstack.org/p/v3-only-devstack
> [3] https://etherpad.openstack.org/p/newton-keystone-v3-devstack
> [4] 
> https://review.openstack.org/#/q/project:openstack-infra/project-config+branch:master+topic:v3-only-integrated
> [5] https://review.openstack.org/#/q/topic:v3-only-functionals-tests-gates
> [6] https://review.openstack.org/#/q/topic:remove-hardcoded-keystone-v2
> [7] https://review.openstack.org/#/q/topic:use-ksa
> 
> Cheers,
> 
> Raildo 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] JavaScript RoadMap for OpenStack Newton

2016-05-12 Thread Jeremy Stanley
On 2016-05-12 16:27:43 + (+), Michael Krotscheck wrote:
[...]
> Has anyone agreed to do the work to include debian nodes in infra?
> I know we've got Centos7 and Fedora23 now, adding Debian doesn't
> seem like a huge stretch.

We have "debian-jessie" images and workers in Nodepool already. Once
usage picks up a bit, I expect to see Stretch/Sid added as well.

> With that in mind, does Debian have exemption rules for
> frequently-updating packages

It used to have something called "volatile" but more recent releases
use some combination of "updates" and "backports" suites to achieve
this.

> like Firefox?

The "iceweasel" (unbranded Firefox) package in Debian is based on
ESR versions to mostly avoid that madness.

> If so, did Node receive one of these exemptions?

Doesn't look like it. The latest "nodejs" package available in
Jessie is based on 0.10.29, though Stretch (testing for the next
release) and Sid (unstable) have 4.3.x versions presently. Also
nodejs 6.0.0 is available in experimental.

> With Node4 LTS now in maintenance, and Node6 LTS officially released,
> that'll make it tricky for us to stick with whatever's in Sid.
> Non-compatible LTS cycles make for an unhappy infra.
[...]

Looks like with Stretch and beyond we can rely on not having to
import/pin/backport a nodejs 4.x package, but in Jessie we'll likely
need a workaround still.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ansible inventory - Would you need another inventory system for your openstack cloud and how would you like it to be?

2016-05-12 Thread Sean M. Collins
Monty Taylor wrote:
> On 05/09/2016 09:27 AM, Jean-Philippe Evrard wrote:
> > Hello everyone,
> > 
> > I am using ansible for some time now, and I think the current default
> > Ansible inventory system is lacking a few features, at least when
> > working on an OpenStack cloud - whether it's for its deployment, its
> > support or its usage.
> > I'm thinking of developing a new inventory, based on (openstack-)ansible
> > experience.
> 
> Before we talk about making a new OpenStack Inventory for ansible, let's
> work on fixing the existing one. We just replaced the nova.py dynamic
> inventory plugin in the last year with the new openstack.py system.

Interesting - I'd like to know more. A quick find / grep didn't help me
find anything, can you help?

Thanks

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Multi-attach/Cinder-Nova weekly IRC meetings

2016-05-12 Thread Mike Perez
On 04:07 May 12, Ildikó Váncsa wrote:
> 
> 
> > -Original Message-
> > From: Mike Perez [mailto:m...@openstack.org]
> > Sent: May 11, 2016 23:52
> > To: Ildikó Váncsa
> > Cc: 'D'Angelo, Scott  (scott.dang...@hpe.com)'; 
> > 'Walter A. Boring IV'; 'John Griffith
> >  (john.griffi...@gmail.com)'; 'Matt Riedemann'; 
> > 'Sean McGinnis'; 'John Garbutt 
> > (j...@johngarbutt.com)'; openstack-dev@lists.openstack.org
> > Subject: Re: [cinder][nova] Multi-attach/Cinder-Nova weekly IRC meetings
> > 
> > On 14:38 May 11, Ildikó Váncsa wrote:
> > > Hi All,
> > >
> > > We will continue the meeting series about the Cinder-Nova interaction 
> > > changes mostly from multiattach  perspective. We have a
> > new meeting slot, which is __Thursday, 1700UTC__ on the 
> > #openstack-meeting-cp channel.
> > >
> > > Related etherpad: https://etherpad.openstack.org/p/cinder-nova-api-changes
> > > Summary about ongoing items: 
> > > http://lists.openstack.org/pipermail/openstack-dev/2016-May/094089.html
> > 
> > I don't think this meeting is registered. [1] Take a look at [2].
> 
> A quick question before I move forward with registering this temporary 
> series. As it is a project-to-project interaction as opposed to a 
> cross-project meeting according to [2], I assume I should pick another IRC 
> channel for this. Is that correct? If yes, is the temporary nature of the 
> meeting series accepted on other meeting channels as well?
> 
> Thanks,
> /Ildikó

Link me to the irc-meeting review and I'll approve it so Anita allows it.

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project][quotas][delimiter] Austin Summit - Design Session Summary

2016-05-12 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2016-05-12 13:37:32 +0200:
> Tim Bell wrote:
> > [...]
> > I think it will be really difficult to persuade the mainstream projects to 
> > adopt
> > a library if it is not part of Oslo. Developing a common library for quota
> > management outside the scope of the common library framework for OpenStack
> > does not seem to be encouraging the widespread use of delimiter.
> > [...]
> 
> I agree it's hard enough to get Oslo libraries used across the board, I 
> don't think the task would be any easier with an external library.
> 
> One case that would justify being external is if the library was 
> generally useful rather than OpenStack-specific: then the Oslo branding 
> might hinder its out-of-OpenStack adoption. But it doesn't feel like 
> that is the case here ?
> 

In the past we've tried to encourage folks creating very specially
focused libraries in areas where the existing Oslo team has no
real experience, such as os-win, to set up their own team. The Oslo team
doesn't have to own all libraries.

On the other hand, in this case I think quota management fits in Oslo as
well as messaging or policy do. We have a mechanism in place for managing
sub-teams so signing up to work on quotas doesn't have to mean signing
up to be oslo-core.

The notion that we're going to try to have all projects depend on
something we create but that we *don't* create as part of an official
project is extremely confusing. Whether we make it part of Oslo or part
of its own thing, I think we want this to be official.

Before pushing too hard to bring the new lib under the Oslo umbrella,
I'd like to understand why folks might not want to do that. What "costs"
are perceived?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [javascript] [eslint-config-openstack] ECMAScript 6 / ECMAScript2015 rules.

2016-05-12 Thread Vitaly Kramskikh
As of now, I've created review requests to enable all the rules used for
Fuel UI except these:

http://eslint.org/docs/rules/object-shorthand
http://eslint.org/docs/rules/prefer-arrow-callback
http://eslint.org/docs/rules/prefer-spread

Enabling them will break valid ES5 code, and I didn't find a way to disable
them for ES5 code - even if env.es6 is false and parserOptions.ecmaVersion
is 5, ESLint applies these rules. So probably they should be enabled after
significant adoption of ES6.

2016-05-12 19:17 GMT+03:00 Michael Krotscheck :

> Fuel has already adopted ES6, and is moving to adopt
> eslint-config-openstack. To help them, Vitaly's started to propose language
> style rules for ES6, which are available to review at the below link:
>
>
> https://review.openstack.org/#/q/topic:es6+project:openstack/eslint-config-openstack
>
> Please take a moment to review these rules. As a reminder, approval for
> eslint-config-openstack requires that a rule receives five positive votes,
> with no negative ones.
>
> Have a great day!
>
> Michael
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] How to document 'labels'

2016-05-12 Thread Tim Bell
I’d be in favor of 1.

At the end of the man page or full help text, a URL could be useful for more
information, but since most people using the CLI will have to do a context
switch to access the docs, it is not a simple click but a
copy/paste/find-the-browser-window operation, which is not so friendly.

Tim

From: Jamie Hannaford 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday 12 May 2016 at 16:04
To: "OpenStack Development Mailing List (not for usage questions)" 

Cc: Qun XK Wang 
Subject: Re: [openstack-dev] [magnum] How to document 'labels'


+1 for 1 and 3.



I'm not sure maintainability should discourage us from exposing information to 
the user through the client - we'll face the same maintenance burden as we 
currently do, and IMO it's our job as a team to ensure our docs are up-to-date. 
Any kind of input which touches the API should also live in the API docs, 
because that's in line with every other OpenStack service.



I don't think I've seen documentation exposed via the API before (#2). I think 
it's a lot of work too, and I don't see what benefit it provides.



Jamie




From: Hongbin Lu 
Sent: 11 May 2016 21:52
To: OpenStack Development Mailing List (not for usage questions)
Cc: Qun XK Wang
Subject: [openstack-dev] [magnum] How to document 'labels'

Hi all,

This is a continued discussion from the last team meeting. For recap, ‘labels’ 
is a property in baymodel and is used by users to input additional key-value 
pairs to configure the bay. In the last team meeting, we discussed what is the 
best way to document ‘labels’. In general, I heard three options:

1.   Place the documentation in Magnum CLI as help text (as Wangqun 
proposed [1][2]).

2.   Place the documentation in Magnum server and expose them via the REST 
API. Then, have the CLI to load help text of individual properties from Magnum 
server.

3.   Place the documentation in a documentation server (like 
developer.openstack.org/…), and add the doc link to the CLI help text.
For option #1, I think an advantage is that it is close to end-users, thus 
providing a better user experience. In contrast, Tom Cammann pointed out a 
disadvantage: the CLI help text might more easily become out of date. For 
option #2, it should work but incurs a lot of extra work. For option #3, the 
disadvantage is the user experience (since users need to click the link to see 
the documents), but it makes maintenance easier for us. I am wondering if it is 
possible to have a combination of #1 and #3. Thoughts?

[1] https://review.openstack.org/#/c/307631/
[2] https://review.openstack.org/#/c/307642/

Best regards,
Hongbin


Rackspace International GmbH a company registered in the Canton of Zurich, 
Switzerland (company identification number CH-020.4.047.077-1) whose registered 
office is at Pfingstweidstrasse 60, 8005 Zurich, Switzerland. Rackspace 
International GmbH privacy policy can be viewed at 
www.rackspace.co.uk/legal/swiss-privacy-policy - This e-mail message may 
contain confidential or privileged information intended for the recipient. Any 
dissemination, distribution or copying of the enclosed material is prohibited. 
If you receive this transmission in error, please notify us immediately by 
e-mail at ab...@rackspace.com and delete the original message. Your cooperation 
is appreciated.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [packaging] How can Upper Constraints be used by packagers

2016-05-12 Thread Doug Hellmann
Excerpts from Matthew Thode's message of 2016-05-12 11:01:52 -0500:
> On 05/12/2016 09:57 AM, Igor Yozhikov wrote:
> > 
> > Hello.
> > 
> > According to the proposed changes in G-R
> > (https://etherpad.openstack.org/p/newton-global-requirements) related
> > to ranges/bounds, I want to clarify the situation for Linux packagers.
> > 
> > Very often, packages for requirements mentioned in the requirements.txt or
> > global-requirements file are built using the code versions set in the lower
> > bounds. Using a broader range for requirements will lead to complex
> > calculations of the minimum version of a requirement that satisfies all of
> > the projects using it. From the packaging perspective, there must be
> > only one installed version of a requirement in a system.
> > 
> > To avoid this complexity and provide co-installability, the upper
> > constraints could be used as the source of the minimal version for
> > requirements in a system package.
> > 
> 
> Hi, Gentoo packager here :D
> 
> The basic gist of it is that g-r.txt is what's expected to work and
> u-c.txt is what's tested to work.  There have been specs out there to
> test a lower-constraints.txt file but I haven't seen it go anywhere quite
> yet.
> 

As Matthew said, the upper constraints list is the set of things we're
actively testing and the range in the global requirements list is the
things we believe to work. We have both to give distros some flexibility
in what they actually package.  That said, I heard pretty consistently
from distro folks at the summit that they try to package the most current
versions of all dependencies. That's more or less what we're testing
with the upper constraints in place, except for the cases where we know
an update breaks something.

Doug
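
To make that concrete, a rough sketch (not from the thread, and assuming the
'packaging' library) of how a packager could take the upper-constraints pin as
the version to build, after checking it against the global-requirements range:

from packaging.requirements import Requirement
from packaging.version import Version

gr = Requirement("oslo.config>=3.9.0")  # example global-requirements entry
uc = Version("3.9.0")                   # example upper-constraints pin

# Package the tested pin, but only if it satisfies the declared range.
assert uc in gr.specifier, "u-c pin falls outside the g-r range"
print("build package: oslo.config==%s" % uc)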

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] JavaScript RoadMap for OpenStack Newton

2016-05-12 Thread Michael Krotscheck
Responses inline:

On 04/21/2016 04:35 PM, Michael Krotscheck wrote:
> > New: Xenial Build Nodes
> >
> > As of two weeks ago, OpenStack’s Infrastructure is running a version of
> > Node.js and npm more recent than what is available on Trusty LTS.
>

Update: We're now on xenial. Yay LTS!


> > Ultimately, we would like to converge this version on Node4 LTS, the
> > release version maintained by the Node foundation. The easiest way to do
> > this is to simply piggyback on Infra’s impending adoption of Xenial
> > build nodes, though some work is required to ensure this transition goes
> > smoothly.
>
> While this is a nice intention, I'd like to remind folks that
> historically, all JS and Node stuff have been maintained in Debian. So
> the work to maintain packages are done in Sid. So best would be to make
> sure the toolchain works there, as this is the way to go also for
> getting stuff pushed to Ubuntu (ie: via Debian).
>

That sounds great, though at this time I do not believe that we're gating
on debian. Has anyone agreed to do the work to include debian nodes in
infra? I know we've got Centos7 and Fedora23 now, adding Debian doesn't
seem like a huge stretch.

With that in mind, does Debian have exemption rules for frequently-updating
packages like Firefox? If so, did Node receive one of these exemptions?
With Node4 LTS now in maintenance, and Node6 LTS officially released,
that'll make it tricky for us to stick with whatever's in Sid.
Non-compatible LTS cycles make for an unhappy infra.


> I'm hereby volunteering to help if we need JS or Node packaging to
> happen. I haven't started yet working on that (like packaging Gulp, see
> later in this message...) but I will, sooner or later.
>

Woot! Thank you!


> As I understand, the way to package NPM stuff is to use npm2deb. Once
> we have npm packages pushed as NodeJS package, they would later on
> be aggregated by some tools. Fuel uses Gulp and RequireJS to do that.
> I'd be nice if we were standardizing on some tooling, so that downstream
> package maintainers wouldn't have to do the work multiple times. Has
> this discussion already happened?


The discussion has happened for _some_ tools (eslint); however, under
OpenStack's governance we can only 'suggest' what people should use, not
enforce it. With that in mind, we've just started the
'js-generator-openstack' project, which will evolve to handle dependency
version maintenance, tooling updates, and new project bootstrapping. I
expect that most of the discussions about "What tools do we use" will
happen there.

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] switch to upstream haproxy module

2016-05-12 Thread Simon Pasquier
On Thu, May 12, 2016 at 6:13 PM, Alex Schultz  wrote:

>
>
> On Thu, May 12, 2016 at 10:00 AM, Simon Pasquier 
> wrote:
>
>> First of all, I'm +1 on this. But as Matt says, it needs to take care of
>> the plugins.
>> A few examples I know of are the Zabbix plugin [1] and the LMA collector
>> plugin [2] that modify the HAProxy configuration of the controller nodes.
>> How could they work with your patch?
>>
>
> So you are leveraging the haproxy on the controller for this
> configuration? I thought I had asked in irc about this and was under the
> impression that you're using your own haproxy configuration on a different
> host(s).  I'll have to figure out an alternative to support plugin haproxy
> configurations as with that patch it would just ignore those configurations.
>

For other plugins, we use dedicated HAProxy nodes but not for these 2 (at
least).
I admit that it wasn't a very good idea but at that time, it was "oh
perfect, /etc/haproxy/conf.d is there, let's use it!". We'll try to think
about a solution on our end too.

Simon


>
> Thanks,
> -Alex
>
>
>> Simon
>>
>> [1]
>> https://github.com/openstack/fuel-plugin-external-zabbix/blob/2.5.0/deployment_scripts/puppet/modules/plugin_zabbix/manifests/ha/haproxy.pp#L16
>> [2]
>> https://github.com/openstack/fuel-plugin-lma-collector/blob/master/deployment_scripts/puppet/manifests/aggregator.pp#L60-L81
>>
>> On Thu, May 12, 2016 at 4:42 PM, Alex Schultz 
>> wrote:
>>
>>>
>>>
>>> On Thu, May 12, 2016 at 8:39 AM, Matthew Mosesohn <
>>> mmoses...@mirantis.com> wrote:
>>>
 Hi Alex,

 Collapsing our haproxy tasks makes it a bit trickier for plugin
 developers. We would still be able to control it via hiera, but it
 means more effort for a plugin developer to run haproxy for a given
 set of services, but explicitly exclude all those it doesn't intend to
 run on a custom role. Maybe you can think of some intermediate step
 that wouldn't add a burden to a plugin developer that would want to
 just proxy keystone and mysql, but not nova/neutron/glance/cinder?


>>> So none of the existing logic has changed around the enabling/disabling
>>> of those tasks within hiera.  The logic remains the same as I'm just
>>> including the osnailyfacter::openstack_haproxy::openstack_haproxy_*
>>> classes[0] within the haproxy task.  The only difference is that the task
>>> logic no longer would control if something was included like sahara.
>>>
>>> -Alex
>>>
>>> [0]
>>> https://review.openstack.org/#/c/307538/9/deployment/puppet/osnailyfacter/modular/cluster-haproxy/cluster-haproxy.pp
>>>
>>>
 On Thu, May 12, 2016 at 5:34 PM, Alex Schultz 
 wrote:
 > Hey Fuelers,
 >
 > We have been using our own fork of the haproxy module within
 fuel-library
 > for some time. This also includes relying on a MOS specific version of
 > haproxy that carries the conf.d hack.  Unfortunately this has meant
 that
 > we've needed to leverage the MOS version of this package when
 deploying with
 > UCA.  As far as I can tell, there is no actual need to continue to do
 this
 > anymore. I have been working on switching to the upstream haproxy
 module[0]
 > so we can drop this custom haproxy package and leverage the upstream
 haproxy
 > module.
 >
 > In order to properly switch to the upstream haproxy module, we need to
 > collapse the haproxy tasks into a single task. With the migration to
 > leveraging classes for task functionality, this is pretty straight
 forward.
 > In my review I have left the old tasks still in place to make sure to
 not
 > break any previous dependencies but they old tasks no longer do
 anything.
 > The next step after this initial merge would be to cleanup the
 haproxy code
 > and extract it from the old openstack module.
 >
 > Please be aware that if you were relying on the conf.d method of
 injecting
 > configurations for haproxy, this will break you. Please speak up now
 so we
 > can figure out an alternative solution.
 >
 > Thanks,
 > -Alex
 >
 >
 > [0] https://review.openstack.org/#/c/307538/
 >
 >
 __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>>
>>> 

[openstack-dev] [javascript] [eslint-config-openstack] ECMAScript 6 / ECMAScript2015 rules.

2016-05-12 Thread Michael Krotscheck
Fuel has already adopted ES6, and is moving to adopt
eslint-config-openstack. To help them, Vitaly's started to propose language
style rules for ES6, which are available to review at the below link:

https://review.openstack.org/#/q/topic:es6+project:openstack/eslint-config-openstack

Please take a moment to review these rules. As a reminder, approval for
eslint-config-openstack requires that a rule receives five positive votes,
with no negative ones.

Have a great day!

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] switch to upstream haproxy module

2016-05-12 Thread Alex Schultz
On Thu, May 12, 2016 at 10:00 AM, Simon Pasquier 
wrote:

> First of all, I'm +1 on this. But as Matt says, it needs to take care of
> the plugins.
> A few examples I know of are the Zabbix plugin [1] and the LMA collector
> plugin [2] that modify the HAProxy configuration of the controller nodes.
> How could they work with your patch?
>

So you are leveraging the haproxy on the controller for this configuration?
I thought I had asked in irc about this and was under the impression that
you're using your own haproxy configuration on a different host(s).  I'll
have to figure out an alternative to support plugin haproxy configurations
as with that patch it would just ignore those configurations.

Thanks,
-Alex


> Simon
>
> [1]
> https://github.com/openstack/fuel-plugin-external-zabbix/blob/2.5.0/deployment_scripts/puppet/modules/plugin_zabbix/manifests/ha/haproxy.pp#L16
> [2]
> https://github.com/openstack/fuel-plugin-lma-collector/blob/master/deployment_scripts/puppet/manifests/aggregator.pp#L60-L81
>
> On Thu, May 12, 2016 at 4:42 PM, Alex Schultz 
> wrote:
>
>>
>>
>> On Thu, May 12, 2016 at 8:39 AM, Matthew Mosesohn > > wrote:
>>
>>> Hi Alex,
>>>
>>> Collapsing our haproxy tasks makes it a bit trickier for plugin
>>> developers. We would still be able to control it via hiera, but it
>>> means more effort for a plugin developer to run haproxy for a given
>>> set of services, but explicitly exclude all those it doesn't intend to
>>> run on a custom role. Maybe you can think of some intermediate step
>>> that wouldn't add a burden to a plugin developer that would want to
>>> just proxy keystone and mysql, but not nova/neutron/glance/cinder?
>>>
>>>
>> So none of the existing logic has changed around the enabling/disabling
>> of those tasks within hiera.  The logic remains the same as I'm just
>> including the osnailyfacter::openstack_haproxy::openstack_haproxy_*
>> classes[0] within the haproxy task.  The only difference is that the task
>> logic no longer would control if something was included like sahara.
>>
>> -Alex
>>
>> [0]
>> https://review.openstack.org/#/c/307538/9/deployment/puppet/osnailyfacter/modular/cluster-haproxy/cluster-haproxy.pp
>>
>>
>>> On Thu, May 12, 2016 at 5:34 PM, Alex Schultz 
>>> wrote:
>>> > Hey Fuelers,
>>> >
>>> > We have been using our own fork of the haproxy module within
>>> fuel-library
>>> > for some time. This also includes relying on a MOS specific version of
>>> > haproxy that carries the conf.d hack.  Unfortunately this has meant
>>> that
>>> > we've needed to leverage the MOS version of this package when
>>> deploying with
>>> > UCA.  As far as I can tell, there is no actual need to continue to do
>>> this
>>> > anymore. I have been working on switching to the upstream haproxy
>>> module[0]
>>> > so we can drop this custom haproxy package and leverage the upstream
>>> haproxy
>>> > module.
>>> >
>>> > In order to properly switch to the upstream haproxy module, we need to
>>> > collapse the haproxy tasks into a single task. With the migration to
>>> > leveraging classes for task functionality, this is pretty straight
>>> forward.
>>> > In my review I have left the old tasks still in place to make sure to
>>> not
>>> > break any previous dependencies but they old tasks no longer do
>>> anything.
>>> > The next step after this initial merge would be to cleanup the haproxy
>>> code
>>> > and extract it from the old openstack module.
>>> >
>>> > Please be aware that if you were relying on the conf.d method of
>>> injecting
>>> > configurations for haproxy, this will break you. Please speak up now
>>> so we
>>> > can figure out an alternative solution.
>>> >
>>> > Thanks,
>>> > -Alex
>>> >
>>> >
>>> > [0] https://review.openstack.org/#/c/307538/
>>> >
>>> >
>>> __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [ironic][openstackclient] deprecation process

2016-05-12 Thread Steve Martinelli
I thought we had this written down somewhere but I can't find it. The OSC
deprecation process is two major releases. So if something was deprecated
in L, it is removed in N. This goes for optional parameters being renamed /
dropped, or commands being dropped / renamed.

If an optional parameter is being deprecated (say --tenant in favor of
--project), then we usually add a mutually exclusive group for these, and
force the user to only pick one, log a deprecation message if they pick the
wrong one, and suppress the help text of the old option. See [1] for an
example
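
For illustration, a minimal argparse sketch (not the actual OSC code, which
builds on cliff) of the pattern described above, using the --tenant/--project
example:

import argparse
import logging

LOG = logging.getLogger(__name__)

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument("--project", metavar="<project>",
                   help="Project to operate on")
group.add_argument("--tenant", metavar="<tenant>",
                   help=argparse.SUPPRESS)  # deprecated alias, help hidden

args = parser.parse_args(["--tenant", "demo"])
if args.tenant:
    LOG.warning("--tenant is deprecated, please use --project instead")
    args.project = args.project or args.tenant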

If a command is being deprecated it's a bit easier, just log a deprecation
message and remove it. If it is being renamed then you can also have it
subclass the new command.

As always, the deprecation message should indicate which command / option
to use.

[1]
https://github.com/openstack/python-openstackclient/blob/b4c3adbd308e65679489c4c64680cbe0324f4bc7/openstackclient/volume/v1/volume.py#L53-L63

On Thu, May 12, 2016 at 9:46 AM, Loo, Ruby  wrote:

> Hi OpenStackClient folks,
>
> Ironic is following the standard deprecation process [1]. We added an OSC
> plugin and realized that we didn’t get the commands quite right. This patch
> [2] adds the right commands and deprecates the wrong ones. My question is
> what the deprecation process might be. Since it is a plugin to OSC, should
> it follow OSC’s deprecation process and if so, what might that process be?
> Or since the commands are related to ironic, should it follow ironic’s
> deprecation process? In particular, I wanted to know how long should/must
> we support those deprecated commands.
>
> For the user’s sake, it seems like it would make sense that all OSC
> (plugin or not, does the user know the difference?) commands follow the
> same deprecation policy.
>
> I took a quick look and didn’t see anything documented about this, so I
> might have missed it.
>
> What sez you?
>
> —ruby
>
> [1]
> https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html
> [2] https://review.openstack.org/#/c/284160
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [packaging] How can Upper Constraints be used by packagers

2016-05-12 Thread Matthew Thode
On 05/12/2016 09:57 AM, Igor Yozhikov wrote:
> 
> Hello.
> 
> According to the proposed changes in G-R
> (https://etherpad.openstack.org/p/newton-global-requirements) related
> to ranges/bounds, I want to clarify the situation for Linux packagers.
> 
> Very often, packages for requirements mentioned in the requirements.txt or
> global-requirements file are built using the code versions set in the lower
> bounds. Using a broader range for requirements will lead to complex
> calculations of the minimum version of a requirement that satisfies all of
> the projects using it. From the packaging perspective, there must be
> only one installed version of a requirement in a system.
> 
> To avoid this complexity and provide co-installability, the upper
> constraints could be used as the source of the minimal version for
> requirements in a system package.
> 

Hi, Gentoo packager here :D

The basic gist of it is that g-r.txt is what's expected to work and
u-c.txt is what's tested to work.  There have been specs out there to
test a lower-constraints.txt file but I haven't seen it go anywhere quite
yet.

-- 
-- Matthew Thode (prometheanfire)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] switch to upstream haproxy module

2016-05-12 Thread Simon Pasquier
First of all, I'm +1 on this. But as Matt says, it needs to take care of
the plugins.
A few examples I know of are the Zabbix plugin [1] and the LMA collector
plugin [2] that modify the HAProxy configuration of the controller nodes.
How could they work with your patch?
Simon

[1]
https://github.com/openstack/fuel-plugin-external-zabbix/blob/2.5.0/deployment_scripts/puppet/modules/plugin_zabbix/manifests/ha/haproxy.pp#L16
[2]
https://github.com/openstack/fuel-plugin-lma-collector/blob/master/deployment_scripts/puppet/manifests/aggregator.pp#L60-L81

On Thu, May 12, 2016 at 4:42 PM, Alex Schultz  wrote:

>
>
> On Thu, May 12, 2016 at 8:39 AM, Matthew Mosesohn 
> wrote:
>
>> Hi Alex,
>>
>> Collapsing our haproxy tasks makes it a bit trickier for plugin
>> developers. We would still be able to control it via hiera, but it
>> means more effort for a plugin developer to run haproxy for a given
>> set of services, but explicitly exclude all those it doesn't intend to
>> run on a custom role. Maybe you can think of some intermediate step
>> that wouldn't add a burden to a plugin developer that would want to
>> just proxy keystone and mysql, but not nova/neutron/glance/cinder?
>>
>>
> So none of the existing logic has changed around the enabling/disabling of
> those tasks within hiera.  The logic remains the same as I'm just including
> the osnailyfacter::openstack_haproxy::openstack_haproxy_* classes[0] within
> the haproxy task.  The only difference is that the task logic no longer
> would control if something was included like sahara.
>
> -Alex
>
> [0]
> https://review.openstack.org/#/c/307538/9/deployment/puppet/osnailyfacter/modular/cluster-haproxy/cluster-haproxy.pp
>
>
>> On Thu, May 12, 2016 at 5:34 PM, Alex Schultz 
>> wrote:
>> > Hey Fuelers,
>> >
>> > We have been using our own fork of the haproxy module within
>> fuel-library
>> > for some time. This also includes relying on a MOS specific version of
>> > haproxy that carries the conf.d hack.  Unfortunately this has meant that
>> > we've needed to leverage the MOS version of this package when deploying
>> with
>> > UCA.  As far as I can tell, there is no actual need to continue to do
>> this
>> > anymore. I have been working on switching to the upstream haproxy
>> module[0]
>> > so we can drop this custom haproxy package and leverage the upstream
>> haproxy
>> > module.
>> >
>> > In order to properly switch to the upstream haproxy module, we need to
>> > collapse the haproxy tasks into a single task. With the migration to
>> > leveraging classes for task functionality, this is pretty straight
>> forward.
>> > In my review I have left the old tasks still in place to make sure to
>> not
>> > break any previous dependencies but they old tasks no longer do
>> anything.
>> > The next step after this initial merge would be to cleanup the haproxy
>> code
>> > and extract it from the old openstack module.
>> >
>> > Please be aware that if you were relying on the conf.d method of
>> injecting
>> > configurations for haproxy, this will break you. Please speak up now so
>> we
>> > can figure out an alternative solution.
>> >
>> > Thanks,
>> > -Alex
>> >
>> >
>> > [0] https://review.openstack.org/#/c/307538/
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] Port binding query

2016-05-12 Thread Antoni Segura Puimedon
On Thu, May 12, 2016 at 4:50 PM, Neil Jerram  wrote:

> I'm trying Kuryr with networking-calico and think I've hit an unhelpful
> inconsistency. A Neutron port has 'id' and 'device_id' fields that are
> usually different. When Nova does VIF binding for a Neutron port, it
> generates the Linux device name from 'tap' + port['id']. But when Kuryr
> does VIF binding for a Neutron port, I think it generates the Linux device
> name from 'tap' + port['device_id'].
>
> Thoughts? Does that sound right, or have I misread the code and my logs?
> If it's correct, it marginally impacts the ability to use identical agent
> and Neutron driver/plugin code for the two cases (Nova and Kuryr).
>

I think we are supposed to behave like Nova, binding-wise.

@Banix: Can you confirm that it is a bug and not a feature?

From a quick grep I see that nova sets the name to be:

nova/network/neutronv2/api.py:    devname = "tap" + current_neutron_port['id']

Whereas in Kuryr we use the first 8 characters of the Docker endpoint id.
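
To make the mismatch concrete, a small sketch (hypothetical names; the
15-character Linux interface-name cap is an assumption here, not from the
thread):

# Nova-style name: derived from the Neutron port id.
def nova_tap_name(port_id):
    return ("tap" + port_id)[:15]  # truncated to fit the interface-name limit

# Kuryr-style name (as observed above): derived from the Docker endpoint id.
def kuryr_tap_name(endpoint_id):
    return "tap" + endpoint_id[:8]

print(nova_tap_name("1f2b4c66-aaaa-bbbb-cccc-ddddeeeeffff"))  # tap1f2b4c66-aaa
print(kuryr_tap_name("9c1d3f7a52e84f0..."))                   # tap9c1d3f7a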


>
> Thanks,
> Neil
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Changes to Ramdisk and iPXE defaults in Devstack and many gate jobs

2016-05-12 Thread Jay Faulkner
Hi all,


A change (https://review.openstack.org/#/c/313035/) to Ironic devstack is in 
the gate, changing the default ironic-python-agent (IPA) ramdisk from CoreOS to 
TinyIPA, and changing iPXE to default enabled.


As part of the work to improve and speed up gate jobs, we determined that using 
iPXE speeds up deployments and makes them more reliable by using HTTP to 
transfer ramdisks instead of TFTP. Additionally, the TinyIPA image, in 
development over the last few months, uses less ram and is smaller, allowing 
faster transfers and more simultaneous VMs to run in the gate.


In addition to changing the devstack default, there's also a patch up: 
https://review.openstack.org/#/c/313800/ to change most Ironic jobs to use iPXE 
and TinyIPA. This change will make IPA have voting check jobs and tarball 
publishing jobs for supported ramdisks (CoreOS and TinyIPA). Ironic (and any 
other projects other than IPA) will use the publicly published tinyipa image.


In summary:

- Devstack changes (merging now):

  - Defaults to TinyIPA ramdisk

  - Defaults to iPXE enabled

- Gate changes (needs review at: https://review.openstack.org/#/c/313800/ )

  - Ironic-Python-Agent

- Voting CoreOS + TinyIPA source (ramdisk built on the fly jobs)

  - Ironic

- Change all jobs (except bash ramdisk pxe_ssh job) to TinyIPA

- Change all jobs but one to use iPXE

- Change all gate jobs to use 512mb of ram


If there are any questions or concerns, feel free to ask here or in 
#openstack-ironic.


P.S. I welcome users of the DIB ramdisk to help make a job to run against IPA. 
All supported ramdisks should be checked in IPA's gate to avoid breakage as IPA 
is inherently dependent on its environment.



Thanks,

Jay Faulkner (JayF)

OSIC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] glance-registry deprecation: Request for feedback

2016-05-12 Thread Jay Pipes

Gotcha. In that case, yeah, kill it :)

On 05/12/2016 11:41 AM, Erno Kuvaja wrote:

On Thu, May 12, 2016 at 4:23 PM, Jay Pipes wrote:

On 05/11/2016 11:51 PM, Flavio Percoco wrote:

Greetings,

The Glance team is evaluating the needs and usefulness of the Glance
Registry
service and this email is a request for feedback from the
overall community
before the team moves forward with anything.

Historically, there have been reasons to create this service. Some
deployments
use it to hide database credentials from Glance public
endpoints, others
use it
for scaling purposes and others because v1 depends on it. This
is a good
time
for the team to re-evaluate the need of these services since v2
doesn't
depend
on it.

So, here's the big question:

Why do you think this service should be kept around?


Question... does the Glare project essentially replace any
functionality that was originally in Glance Registry?

Best,
-jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


No, the v2 API service's ability to talk to the db
directly did.

- Erno


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] glance-registry deprecation: Request for feedback

2016-05-12 Thread Erno Kuvaja
On Thu, May 12, 2016 at 4:23 PM, Jay Pipes  wrote:

> On 05/11/2016 11:51 PM, Flavio Percoco wrote:
>
>> Greetings,
>>
>> The Glance team is evaluating the needs and usefulness of the Glance
>> Registry
>> service and this email is a request for feedback from the overall
>> community
>> before the team moves forward with anything.
>>
>> Historically, there have been reasons to create this service. Some
>> deployments
>> use it to hide database credentials from Glance public endpoints, others
>> use it
>> for scaling purposes and others because v1 depends on it. This is a good
>> time
>> for the team to re-evaluate the need of these services since v2 doesn't
>> depend
>> on it.
>>
>> So, here's the big question:
>>
>> Why do you think this service should be kept around?
>>
>
> Question... does the Glare project essentially replace any functionality
> that was originally in Glance Registry?
>
> Best,
> -jay
>
>
>

No, the v2 API service's ability to talk to the db directly did.

- Erno
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Jinja2 for Heat template

2016-05-12 Thread Hongbin Lu
We discussed the management of Heat templates several times. It seems the 
consensus is to leverage the *conditionals* feature from Heat (option #1). From 
the past discussion, it sounds like option #2 or #3 would significantly 
complicate our Heat templates, thus incurring a maintenance burden.

However, I agree with Yuanying that option #1 will make the Newton (or newer) 
version of Magnum incompatible with the Mitaka (or older) version of OpenStack. 
A solution I can think of is to have a Jinja2 version of the Heat template in 
the contrib folder, so that operators can swap the Heat templates if they want 
to run a newer version of Magnum with an older version of OpenStack. Thoughts?

Best regards,
Hongbin

From: Yuanying OTSUKA [mailto:yuany...@oeilvert.org]
Sent: May-12-16 6:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Jinja2 for Heat template

Hi,
Thanks for your helpful comment.

I didn’t know about the pattern you suggested.
We often want “if” or “for” constructs, etc.

For example,
* if a private network is supplied as a parameter, disable creating the 
network resource.
* if the https parameter is enabled, TCP port 6443 should be opened instead of 
8080 at “OS::Neutron::SecurityGroup”.
* if the https parameter is enabled, the load balancing protocol should be TCP 
instead of HTTP.

and so on.
So, I want to use a Jinja2 template to manage it.
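
For illustration, here is a minimal sketch of the kind of conditional we have
in mind (the https_enabled variable and resource name are hypothetical, not
from an actual Magnum template):

  api_security_group:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - protocol: tcp
  {% if https_enabled %}
          port_range_min: 6443
          port_range_max: 6443
  {% else %}
          port_range_min: 8080
          port_range_max: 8080
  {% endif %}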

I’ll try to use the composition model above,
and also test the limited use of jinja2 templating.


Thanks
- OTSUKA, Yuanying



On Thu, May 12, 2016 at 17:46, Steven Hardy wrote:
On Thu, May 12, 2016 at 11:08:02AM +0300, Pavlo Shchelokovskyy wrote:
>Hi,
>
>not sure why 3 will bring chaos when implemented properly.

I agree - heat is designed with composition in mind, and e.g in TripleO
we're making heavy use of it for optional configurations and it works
pretty well:

http://docs.openstack.org/developer/heat/template_guide/composition.html

https://www.youtube.com/watch?v=fw0JhywwA1E

http://hardysteven.blogspot.co.uk/2015/05/tripleo-heat-templates-part-1-roles-and.html

https://github.com/openstack/tripleo-heat-templates/tree/master/environments

>Can you abstract the "thing" (sorry, not quite familiar with Magnum) that
>needs FP + FP itself into a custom resource/nested stack? Then you could
>use single master template plus two environments (one with FP, one
>without), and choose which one to use right where you have this logic
>split in your code.

Yes, this is exactly the model we make heavy use of in TripleO, it works
pretty well.

Note there's now an OS::Heat::None resource in heat, which makes it easy to
conditionally disable things (without the need for a noop.yaml template
that contains matching parameters):

http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::None

So you'd have two environment files like:

cat enable_floating.yaml:
resource_registry:
  OS::Magnum::FloatingIP: templates/the_floating_config.yaml

cat disable_floating.yaml:
resource_registry:
  OS::Magnum::FloatingIP: OS::Heat::None

Again, this pattern is well proven and works pretty well.
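
To make the example concrete, the nested stack registered above could be as
small as the following (a sketch assuming the parent stack passes in a port;
this is not Magnum's actual template):

  # templates/the_floating_config.yaml (sketch)
  heat_template_version: 2015-04-30
  parameters:
    port:
      type: string
  resources:
    floating_ip:
      type: OS::Neutron::FloatingIP
      properties:
        floating_network: public
        port_id: {get_param: port}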

Conditionals may provide an alternative way to do this, but at the expense
of some additional complexity inside the templates.

>Option 2 is not so bad either IMO (AFAIK Trove was doing that at some point,
>not sure of the current status), but the above would be nicer.

Yes, in the past[1] I've commented that the composition model above may be
preferable to jinja templating, but recently I've realized there are pros
and cons to each approach.

The heat composition model works pretty well when you want to combine
multiple pieces (nested stacks) which contain some mixture of different
resources, but it doesn't work so well when you want to iterate over a
common pattern and build a template (e.g. based on a loop).

You can use ResourceGroups in some cases, but that adds to the stack depth
(number of nested stacks), and may not be workable for upgrades, so TripleO
is now looking at some limited use of jinja2 templating also. I agree it's
not so bad, provided the interfaces presented to the user are carefully
constrained.

Steve

[1] https://review.openstack.org/#/c/211771/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-12 Thread Carl Baldwin
Hi,

Segments are now a first-class thing in Neutron with the merge of this
patch [1].  It exposes an API for segments directly.  With ML2, it is
currently only possible to view segments that have been created
through the provider net or multi-provider net extensions, and this can
only be done at network creation time.

In order to allow multi-segmented routed provider networks to grow and
shrink over time, it is necessary to allow creation and deletion of
segments through the new segment endpoint.  Hong Hui Xiao has offered
to help with this.
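
For readers following along, the shape of the call we are after is roughly the
following (field names as in the segments work so far; treat this as a sketch,
not a final API contract):

  POST /v2.0/segments
  {
      "segment": {
          "network_id": "<network uuid>",
          "network_type": "vlan",
          "physical_network": "physnet1",
          "segmentation_id": 2016
      }
  }

plus a matching DELETE /v2.0/segments/<segment uuid> for shrinking a network.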

We need to provide the integration between the service plugin that
provides the segments endpoint and ML2, so that creates and deletes
work properly.  We'd like to hear from ML2 experts out there on how
this integration can proceed.  Is there any caution that we need to
take?  What are the non-obvious aspects of this that we're not
thinking about?

Carl Baldwin

[1] https://review.openstack.org/#/c/296603/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] glance-registry deprecation: Request for feedback

2016-05-12 Thread Jay Pipes

On 05/11/2016 11:51 PM, Flavio Percoco wrote:

Greetings,

The Glance team is evaluating the needs and usefulness of the Glance
Registry
service and this email is a request for feedback from the overall community
before the team moves forward with anything.

Historically, there have been reasons to create this service. Some
deployments
use it to hide database credentials from Glance public endpoints, others
use it
for scaling purposes and others because v1 depends on it. This is a good
time
for the team to re-evaluate the need of these services since v2 doesn't
depend
on it.

So, here's the big question:

Why do you think this service should be kept around?


Question... does the Glare project essentially replace any functionality 
that was originally in Glance Registry?


Best,
-jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] How to contribute to kolla

2016-05-12 Thread Steven Dake (stdake)
Hu,

I'd recommend joining IRC and asking in #openstack-infra.  They can help you 
live-debug your connection to gerrit.  I have also seen Qiming's response, 
which may or may not help.

Regards
-steve


From: "hu.zhiji...@zte.com.cn" 
>
Reply-To: 
"openstack-dev@lists.openstack.org" 
>
Date: Thursday, May 12, 2016 at 12:40 AM
To: 
"openstack-dev@lists.openstack.org" 
>
Subject: [openstack-dev] [kolla] How to contribute to kolla

Hi all kolla fellows,

Previously, I managed to submit a code review to the kolla repo, but now I 
couldn't. Do I need additional rights or configuration? Below is my git remote 
-v output:

gerrit  https://hu...@review.openstack.org/openstack/kolla.git (fetch)
gerrit  https://hu...@review.openstack.org/openstack/kolla.git (push)
origin  https://hu...@review.openstack.org/openstack/kolla (fetch)
origin  https://hu...@review.openstack.org/openstack/kolla (push)

Please help take a look at this.


Many thanks!

Zhijiang







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [freezer] Newton Summit Freezer Sessions Recap

2016-05-12 Thread Mathieu, Pierre-Arthur
Hello everyone, 

Here is the recap of what happened during the Newton Summit in Austin
concerning Freezer:

We had one presentation, given by Fausto Marzi (daemontool) and Fabrizio
Fresco (frescof), as well as four developer sessions.

Overall, we noticed large attendance and strong interest from the
community.

We would like to thank everyone who attended one of our sessions!


Presentation: "From Backup/Restore aaS to a fully DR solution for OpenStack"
  Here is the video: [1]


Session 1 : "Backup your OpenStack infrastructure"
  Etherpad: [2]
  The goal of this session was to gather as much feedback as possible
  about what needs to be backed up in an OpenStack infrastructure.
  Main features to implement in the future:
- Support of PostgreSQL backup
- Support of Elasticsearch backup
- Support of Ceph (at least cluster map) backup


Session 2 : "Backup/Restore as a Service"
  Etherpad: [3]
  The goal of this session was to gather feedback on what freezer needs
  to implement to be a better Backup/Restore as a Service solution.
  We also wanted to start the discussion about scalability, which led to
  talking about deduplication and block-based incrementals.
  Main features to implement in the future:
   - Oslo policy (WIP)
   - Block-based incremental (WIP)
   - Agent deployment automation (Cloudinit, Ansible, Puppet, Chef)
   - Remote backup / agentless (still under discussion)
   - UX improvement
   - Barbican integration to help with encryption key management
   - Audit of backups
   - Mistral / Heat integration
  

Session 3 : "Disaster Recovery"
  Etherpad: [4]
  During this session, we described again our plan to provide disaster
  recovery with freezer through freezer-dr.


Session 4: "Contributors meetup"
  Etherpad: [5]
  During this session, we went over some freezer basics before explaining
  the freezer-agent refactoring that will happen during the Newton cycle.
  We then had a conversation with Saggi Mizrahi about Smaug and its possible
  integration with Freezer.
  We spoke about Ironic-agent based backups that could lead to easy TripleO
  support.
  The last topic was agentless backup, which we want to plan during the cycle.
  

Also, we would like to welcome the team from Cloudbase! They are going to help
us with the Microsoft side of the development.


Here is the high level roadmap / priority list for the Newton cycle:
  - Hardening, testing
  - Agent refactoring to add plugins layers
  - Documentation refactoring
  - Providing automation scripts
  - Integration with other projects (Heat, Barbican, Mistral, Smaug)
  - Deduplication and agentless planning
  - Disaster recovery kick-off



[1] 
https://www.openstack.org/videos/video/freezer-from-backuprestore-aas-to-a-fully-dr-solution-for-openstack
[2] https://etherpad.openstack.org/p/freezer_austin_session_backup_os_infra
[3] https://etherpad.openstack.org/p/freezer_austin_session_baas
[4] https://etherpad.openstack.org/p/freezer_austin_session_disaster_recovery
[5] https://etherpad.openstack.org/p/freezer_austin_session_contributors_meetup


Thank you for reading, 

Pierre, for the Freezer Team

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cross-project][infra][keystone] Moving towards a Identity v3-only on Devstack - Next Steps

2016-05-12 Thread Raildo Mascena
Hi folks,

Although the Identity v2 API is deprecated as of Mitaka [1], some services
haven't implemented proper support for v3 yet. For instance, we implemented
a patch making DevStack v3 by default that, when merged, broke a lot of
project gates within a few hours [2]. This happened because specific services
have compatibility issues with the Keystone v3 API, such as hardcoded v2
usage, use of the removed keystoneclient CLI, requests for v2 service tokens
and the lack of keystoneauth session usage.

To discuss those points, we held a cross-project work session at the Newton
Summit [3]. One point we are working on at the moment is creating gates to
ensure the main OpenStack services can live without the Keystone v2 API. Those
gates set up devstack with only Identity v3 enabled and run the Tempest suite
in that environment.
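
For reference, a minimal way to reproduce such an environment locally is a
devstack local.conf along these lines (a sketch, assuming the
ENABLE_IDENTITY_V2 toggle available in devstack at the time):

  [[local|localrc]]
  # Run Keystone with only the Identity v3 API enabled
  ENABLE_IDENTITY_V2=False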

We already did that for a few services, like Nova, Cinder, Glance, Neutron,
Swift. We are doing the same job for other services such as Ironic, Magnum,
Ceilometer, Heat and Barbican [4].

In addition, we are creating jobs to run functional tests for the services
in this Identity v3-only environment [5]. We are also working on a couple of
other fronts, like removing some hardcoded v2 usage [6] and implementing
keystoneauth session support in clients and APIs [7].

Our plan is to keep tackling as many items from the cross-project session
etherpad as we can, so we can gain more confidence in moving DevStack to
v3-only, making sure everyone is prepared to work with the Keystone v3
API.

Feedback and reviews are very much appreciated.

[1] https://review.openstack.org/#/c/251530/
[2] https://etherpad.openstack.org/p/v3-only-devstack
[3] https://etherpad.openstack.org/p/newton-keystone-v3-devstack
[4]
https://review.openstack.org/#/q/project:openstack-infra/project-config+branch:master+topic:v3-only-integrated
[5] https://review.openstack.org/#/q/topic:v3-only-functionals-tests-gates
[6] https://review.openstack.org/#/q/topic:remove-hardcoded-keystone-v2
[7] https://review.openstack.org/#/q/topic:use-ksa

Cheers,

Raildo
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Easing contributions to central documentation

2016-05-12 Thread Neil Jerram
On 09/05/16 22:57, Matt Kassawara wrote:
> At each summit, I speak with a variety of developers from different
> projects about the apparent lack of contributions to the central
> documentation. At previous summits, the most common complaint involved
> using DocBook. After converting most of the documentation to RST, the
> most common complaint at the recent summit involves the review
> process, particularly the lengthy amount of time patches sit in the
> review queue with -1s for various "conventions" problems such as
> structure, formatting, grammar, spelling, etc. Unlike most OpenStack
> developers that focus on a particular project, the documentation team
> covers all projects and lacks the capacity to understand each one
> enough to contribute and maintain technically accurate documentation
> in a timely manner. However, covering all projects enables the
> documentation team to organize and present the documentation to
> various audiences, primarily operators and users, that consume
> OpenStack as a coherent product. In other words, the entire process
> relies on developers contributing to the central documentation. So,
> before developer frustrations drive some or all projects to move their
> documentation in-tree, which negatively impacts the goal of
> presenting a coherent product, I suggest establishing an agreement
> between developers and the documentation team regarding the review
> process.
>
> As much as the documentation team wants to present OpenStack as a
> coherent product, it contains many projects with different
> contribution processes. In some cases, individual developers prefer to
> contribute in unique ways. Thus, the conventional "one-size-fits-all"
> approach that the documentation team historically takes with reviewing
> contributions from developers yields various levels of frustration
> among projects and developers. I ran a potential solution by various
> developers during the recent summit and received enough positive
> feedback to discuss it with a larger audience. So, here goes...
>
> A project or individual developer decides the level of documentation
> team involvement with reviewing patches. The developer adds a WIP to
> the documentation patch while adding content to prevent premature
> reviews by the documentation team. Once the content achieves a
> sufficient level of technical accuracy, the developer removes the WIP
> and adds a comment in the review indicating one of the following preferences:
>
> 1) The documentation team should review the patch for compliance with
> conventions (proper structure, format, grammar, spelling, etc.) and
> provide feedback to the developer who updates the patch.
> 2) The documentation team should modify the patch to make it compliant
> and ask the developer for a final review prior to merging it.
> 3) The documentation team should only modify the patch to make it
> build (if necessary) and quickly merge it with a documentation bug to
> resolve any compliance problems in a future patch by the documentation
> team.
>
> What do you think?

I have mixed feelings about this.  I have contributed documentation in
the past, and felt frustrated by the level of pickiness of the reviews -
to the extent of being somewhat demotivated about contributing more doc
improvements and additions.  So I think I understand where this
conversation is arising from.

On the other hand, firstly I like some of the pickiness, e.g. I don't
really want to see our docs littered with spelling mistakes - so it
might just be that I'm being subjective about what I think is good and
bad pickiness; and secondly I think we should acknowledge that this
isn't only a documentation issue: I've had 'could you also clean this up
while you're in the area' comments, and comments that seem to ask for
things just because they can, rather than being properly argued or
clearly beneficial, for code changes just as much as for docs.

On balance, though, and given that IMO we are still lacking a lot of
important OpenStack documentation (or documentation structure), I think
it would be good for documentation reviewers to adjust their bar down
slightly, so as to encourage more contributions.

I'm not sure if that needs the detailed solution proposed above; it
should be enough for the team to agree on an adjusted approach among
themselves.

Regards,
Neil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements] [packaging] How can Upper Constraints be used by packagers

2016-05-12 Thread Igor Yozhikov
Hello.

*Background: *

Linux packages like DEB/RPM have “Depends:” clauses. Currently, packagers
use global-requirements.txt (via requirements.txt) to come up with the
version range for this “Depends: || Requires:” clause.

*Example :*

[deb] http://noodle.portalus.net/debian/pool/main/n/nova/nova_13.0.0-2.dsc

[rpm]
https://github.com/openstack-packages/nova/blob/rpm-master/openstack-nova.spec#L378


According to the proposed changes in G-R
(https://etherpad.openstack.org/p/newton-global-requirements) related to
ranges/bounds, I want to clarify the situation for Linux packagers.

Very often, packages for requirements mentioned in the requirements.txt or
global-requirements file are built using the code versions set as lower
bounds. Using a broader range for requirements leads to complex calculations
of the minimum version of a requirement that satisfies all the projects using
it. From the packaging perspective, there must be only one installed version
of a requirement in a system.

To avoid this complexity and provide co-installability, upper constraints
could be used as the source of the minimum version for requirements in a
system package.

Example (Mitaka):

Package python-nova with requirements according to global-requirements
(https://github.com/openstack/requirements/blob/stable/mitaka/global-requirements.txt#L68):

Depends:
 python-iso8601 (>= 0.1.9),

Package python-nova with requirements according to upper-constraints
(https://github.com/openstack/requirements/blob/stable/mitaka/upper-constraints.txt#L153):

Depends:
 python-iso8601 (>= 0.1.11),
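
As a sketch of how a packager might automate this, here is a minimal
hypothetical helper that turns an upper-constraints pin into a Debian-style
Depends entry; the naive python-<name> mapping is only for illustration, as
real packaging uses curated name maps:

  # Sketch only -- not part of any packaging tool.
  def depends_from_constraint(line):
      """Turn 'iso8601===0.1.11' into 'python-iso8601 (>= 0.1.11)'."""
      name, _, version = line.strip().partition('===')
      return 'python-%s (>= %s)' % (name.lower(), version)

  with open('upper-constraints.txt') as constraints:
      for line in constraints:
          if line.startswith('iso8601==='):
              # Prints: python-iso8601 (>= 0.1.11)
              print(depends_from_constraint(line))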

Thanks,
Igor Yozhikov
Senior Deployment Engineer
at Mirantis 
skype: igor.yozhikov
cellular: +7 901 5331200
slack: iyozhikov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] Port binding query

2016-05-12 Thread Neil Jerram
I'm trying Kuryr with networking-calico and think I've hit an unhelpful
inconsistency. A Neutron port has 'id' and 'device_id' fields that are
usually different. When Nova does VIF binding for a Neutron port, it
generates the Linux device name from 'tap' + port['id']. But when Kuryr
does VIF binding for a Neutron port, I think it generates the Linux device
name from 'tap' + port['device_id'].
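
To make the suspected mismatch concrete, here is a minimal sketch (the
14-character truncation mirrors Nova's naming convention; this is not the
actual Nova or Kuryr code):

  def tap_device_name(identifier):
      # Linux interface names are length-limited, so the UUID is truncated.
      return ('tap' + identifier)[:14]

  port = {'id': '1b4a...', 'device_id': '9c3d...'}
  tap_device_name(port['id'])         # what Nova creates and agents expect
  tap_device_name(port['device_id'])  # what Kuryr appears to generate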

Thoughts? Does that sound right, or have I misread the code and my logs? If
it's correct, it marginally impacts the ability to use identical agent and
Neutron driver/plugin code for the two cases (Nova and Kuryr).

Thanks,
Neil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] switch to upstream haproxy module

2016-05-12 Thread Alex Schultz
On Thu, May 12, 2016 at 8:39 AM, Matthew Mosesohn 
wrote:

> Hi Alex,
>
> Collapsing our haproxy tasks makes it a bit trickier for plugin
> developers. We would still be able to control it via hiera, but it
> means more effort for a plugin developer who wants to run haproxy for a
> given set of services while explicitly excluding all those it doesn't
> intend to run on a custom role. Maybe you can think of some intermediate
> step that wouldn't add a burden to a plugin developer who wants to
> just proxy keystone and mysql, but not nova/neutron/glance/cinder?
>
>
So none of the existing logic has changed around the enabling/disabling of
those tasks within hiera.  The logic remains the same, as I'm just including
the osnailyfacter::openstack_haproxy::openstack_haproxy_* classes [0] within
the haproxy task.  The only difference is that the task logic no longer
controls whether something like sahara is included.

-Alex

[0]
https://review.openstack.org/#/c/307538/9/deployment/puppet/osnailyfacter/modular/cluster-haproxy/cluster-haproxy.pp


> On Thu, May 12, 2016 at 5:34 PM, Alex Schultz 
> wrote:
> > Hey Fuelers,
> >
> > We have been using our own fork of the haproxy module within fuel-library
> > for some time. This also includes relying on a MOS specific version of
> > haproxy that carries the conf.d hack.  Unfortunately this has meant that
> > we've needed to leverage the MOS version of this package when deploying
> with
> > UCA.  As far as I can tell, there is no actual need to continue to do
> this
> > anymore. I have been working on switching to the upstream haproxy
> module[0]
> > so we can drop this custom haproxy package and leverage the upstream
> haproxy
> > module.
> >
> > In order to properly switch to the upstream haproxy module, we need to
> > collapse the haproxy tasks into a single task. With the migration to
> > leveraging classes for task functionality, this is pretty straight
> forward.
> > In my review I have left the old tasks still in place to make sure to not
> > break any previous dependencies, but the old tasks no longer do anything.
> > The next step after this initial merge would be to clean up the haproxy
> code
> > and extract it from the old openstack module.
> >
> > Please be aware that if you were relying on the conf.d method of
> injecting
> > configurations for haproxy, this will break you. Please speak up now so
> we
> > can figure out an alternative solution.
> >
> > Thanks,
> > -Alex
> >
> >
> > [0] https://review.openstack.org/#/c/307538/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] switch to upstream haproxy module

2016-05-12 Thread Alex Schultz
Hey Fuelers,

We have been using our own fork of the haproxy module within fuel-library
for some time. This also includes relying on a MOS specific version of
haproxy that carries the conf.d hack.  Unfortunately this has meant that
we've needed to leverage the MOS version of this package when deploying
with UCA.  As far as I can tell, there is no actual need to continue to do
this anymore. I have been working on switching to the upstream haproxy
module[0] so we can drop this custom haproxy package and leverage the
upstream haproxy module.

In order to properly switch to the upstream haproxy module, we need to
collapse the haproxy tasks into a single task. With the migration to
leveraging classes for task functionality, this is pretty straightforward.
In my review I have left the old tasks in place to make sure not to
break any previous dependencies, but the old tasks no longer do anything.
The next step after this initial merge would be to clean up the haproxy code
and extract it from the old openstack module.

Please be aware that if you were relying on the conf.d method of injecting
configurations for haproxy, this will break you. Please speak up now so we
can figure out an alternative solution.

Thanks,
-Alex


[0] https://review.openstack.org/#/c/307538/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] switch to upstream haproxy module

2016-05-12 Thread Matthew Mosesohn
Hi Alex,

Collapsing our haproxy tasks makes it a bit trickier for plugin
developers. We would still be able to control it via hiera, but it
means more effort for a plugin developer who wants to run haproxy for a
given set of services while explicitly excluding all those it doesn't
intend to run on a custom role. Maybe you can think of some intermediate
step that wouldn't add a burden to a plugin developer who wants to
just proxy keystone and mysql, but not nova/neutron/glance/cinder?

On Thu, May 12, 2016 at 5:34 PM, Alex Schultz  wrote:
> Hey Fuelers,
>
> We have been using our own fork of the haproxy module within fuel-library
> for some time. This also includes relying on a MOS specific version of
> haproxy that carries the conf.d hack.  Unfortunately this has meant that
> we've needed to leverage the MOS version of this package when deploying with
> UCA.  As far as I can tell, there is no actual need to continue to do this
> anymore. I have been working on switching to the upstream haproxy module[0]
> so we can drop this custom haproxy package and leverage the upstream haproxy
> module.
>
> In order to properly switch to the upstream haproxy module, we need to
> collapse the haproxy tasks into a single task. With the migration to
> leveraging classes for task functionality, this is pretty straight forward.
> In my review I have left the old tasks still in place to make sure to not
> break any previous dependencies but they old tasks no longer do anything.
> The next step after this initial merge would be to cleanup the haproxy code
> and extract it from the old openstack module.
>
> Please be aware that if you were relying on the conf.d method of injecting
> configurations for haproxy, this will break you. Please speak up now so we
> can figure out an alternative solution.
>
> Thanks,
> -Alex
>
>
> [0] https://review.openstack.org/#/c/307538/
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] JavaScript RoadMap for OpenStack Newton

2016-05-12 Thread Thomas Goirand
Hi,

If I may bring some insights from the distro's viewpoint...

On 04/21/2016 04:35 PM, Michael Krotscheck wrote:
> New: Xenial Build Nodes
> 
> As of two weeks ago, OpenStack’s Infrastructure is running a version of
> Node.js and npm more recent than what is available on Trusty LTS.
> Ultimately, we would like to converge this version on Node4 LTS, the
> release version maintained by the Node foundation. The easiest way to do
> this is to simply piggyback on Infra’s impending adoption of Xenial
> build nodes, though some work is required to ensure this transition goes
> smoothly.

While this is a nice intention, I'd like to remind folks that
historically, all the JS and Node stuff has been maintained in Debian, so
the work to maintain packages is done in Sid. It would be best to make
sure the toolchain works there, as this is also the way to get stuff
pushed to Ubuntu (i.e. via Debian).

I'm hereby volunteering to help if we need JS or Node packaging to
happen. I haven't started yet working on that (like packaging Gulp, see
later in this message...) but I will, sooner or later.

As I understand it, the way to package npm stuff is to use npm2deb. Once we
have npm packages pushed as NodeJS packages, they would later be
aggregated by some tools. Fuel uses Gulp and RequireJS to do that. It'd
be nice if we standardized on some tooling, so that downstream
package maintainers wouldn't have to do the work multiple times. Has
this discussion already happened?
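
For anyone unfamiliar with npm2deb, the workflow starts out roughly like this
(a sketch; exact subcommands and flags may differ between versions):

  # Generate a Debian packaging skeleton for an npm module.
  npm2deb create gulp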

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] How to document 'labels'

2016-05-12 Thread Jamie Hannaford
+1 for 1 and 3.


I'm not sure maintainability should discourage us from exposing information to 
the user through the client - we'll face the same maintenance burden as we 
currently do, and IMO it's our job as a team to ensure our docs are up-to-date. 
Any kind of input which touches the API should also live in the API docs, 
because that's in line with every other OpenStack service.


I don't think I've seen documentation exposed via the API before (#2). I think 
it's a lot of work too, and I don't see what benefit it provides.


Jamie



From: Hongbin Lu 
Sent: 11 May 2016 21:52
To: OpenStack Development Mailing List (not for usage questions)
Cc: Qun XK Wang
Subject: [openstack-dev] [magnum] How to document 'labels'

Hi all,

This is a continued discussion from the last team meeting. For recap, 'labels' 
is a property in baymodel and is used by users to input additional key-value 
pairs to configure the bay. In the last team meeting, we discussed the best 
way to document 'labels'. In general, I heard three options:

1.   Place the documentation in Magnum CLI as help text (as Wangqun 
proposed [1][2]).

2.   Place the documentation in the Magnum server and expose it via the REST 
API. Then, have the CLI load help text for individual properties from the 
Magnum server.

3.   Place the documentation in a documentation server (like 
developer.openstack.org/...), and add the doc link to the CLI help text.

For option #1, I think an advantage is that it is close to end users, thus 
providing a better user experience. In contrast, Tom Cammann pointed out a 
disadvantage: the CLI help text can more easily become out of date. Option #2 
should work but incurs a lot of extra work. For option #3, the disadvantage is 
the user experience (since users need to click the link to see the documents) 
but it makes the docs easier to maintain. I am wondering if it is possible to 
have a combination of #1 and #3. Thoughts?

[1] https://review.openstack.org/#/c/307631/
[2] https://review.openstack.org/#/c/307642/

Best regards,
Hongbin


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Easing contributions to central documentation

2016-05-12 Thread Olga Gusarenko
On Tue, May 10, 2016 at 12:40 AM, Matt Kassawara 
wrote:

> At each summit, I speak with a variety of developers from different
> projects about the apparent lack of contributions to the central
> documentation. At previous summits, the most common complaint involved
> using DocBook. After converting most of the documentation to RST, the most
> common complaint at the recent summit involves the review process,
> particularly the lengthy amount of time patches sit in the review queue
> with -1s for various "conventions" problems such as structure, formatting,
> grammar, spelling, etc. Unlike most OpenStack developers that focus on a
> particular project, the documentation team covers all projects and lacks
> the capacity to understand each one enough to contribute and maintain
> technically accurate documentation in a timely manner. However, covering
> all projects enables the documentation team to organize and present the
> documentation to various audiences, primarily operators and users, that
> consume OpenStack as a coherent product. In other words, the entire process
> relies on developers contributing to the central documentation. So, before
> developer frustrations drive some or all projects to move their
> documentation in-tree, which negatively impacts the goal of presenting
> a coherent product, I suggest establishing an agreement between developers
> and the documentation team regarding the review process.
>
> As much as the documentation team wants to present OpenStack as a coherent
> product, it contains many projects with different contribution processes.
> In some cases, individual developers prefer to contribute in unique ways.
> Thus, the conventional "one-size-fits-all" approach that the documentation
> team historically takes with reviewing contributions from developers yields
> various levels of frustration among projects and developers. I ran a
> potential solution by various developers during the recent summit and
> received enough positive feedback to discuss it with a larger audience. So,
> here goes...
>
> A project or individual developer decides the level of documentation team
> involvement with reviewing patches. The developer adds a WIP to the
> documentation patch while adding content to prevent premature reviews by
> the documentation team. Once the content achieves a sufficient level of
> technical accuracy, the developer removes the WIP and adds a comment in the
> review indicating one of the following preferences:
>
> 1) The documentation team should review the patch for compliance with
> conventions (proper structure, format, grammar, spelling, etc.) and provide
> feedback to the developer who updates the patch.
> 2) The documentation team should modify the patch to make it compliant and
> ask the developer for a final review prior to merging it.
> 3) The documentation team should only modify the patch to make it build
> (if necessary) and quickly merge it with a documentation bug to resolve any
> compliance problems in a future patch by the documentation team.
>
> What do you think?
>

+1 to the second option!

In most cases it's faster and easier just to patch your suggestions rather
than comment -> wait for the response -> check again. Just less talk, more
action...

Olga


>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best regards,
Olga Gusarenko

Technical Writer | Mirantis, Kharkiv | 38, Lenin av., Kharkiv, Ukraine
ogusare...@mirantis.com | skype: gusarenko.olga
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [OpenstackClient] deprecations

2016-05-12 Thread Loo, Ruby
Thanks Jim, for explaining why ironic doesn't have that governance tag yet.

I didn't (except now, need caffeine obviously) see the OpenStackClient tag 
added to the subject line and thought you had forgotten and so just fired off 
another "cleaner" email about it. Oops. Apologies to everyone for the duplicate 
email.

—ruby



On 2016-05-11, 11:46 AM, "Jim Rollenhagen"  wrote:

>On Wed, May 11, 2016 at 02:35:12PM +, Loo, Ruby wrote:
>> Hi ironic'ers,
>> 
>> I thought we had decided that we would follow the standard deprecation
>> process [1], but I see that ironic isn't tagged with that [2].
>> Although we have documented guidelines wrt deprecations [3]. But I am
>> not sure we've been good about sending out email about deprecations.
>> Does anyone know/remember what we decided?
>
>So, we do follow the process (fairly well, IMO). However, the piece
>we're waiting on to assert this tag is this:
>
>In addition, projects assert that:
>
>It uses an automated test to verify that configuration files are
>forward-compatible from release to release and that this policy is
>not accidentally broken (for example, a gating grenade test).
>
>Given we don't have gating upgrade tests yet, we cannot yet assert this
>tag.
>
>> And the whole reason I was looking into this was because we have some
>> openstackclient commands that we want to deprecate [4], and I wanted
>> to know what the process was for that. How long should/must we keep
>> those deprecated commands. Is this considered part of the ironic
>> client, or part of openstackclient which might have its own
>> deprecation policy.  (So maybe this part should be in a different
>> email thread but anyway.)
>
>That's a great question, and I'm not sure. Maybe OSC folks can comment
>their thoughts. Added their tag in the subject.
>
>In general, I think we should just follow the standard policy for this.
>
>// jim
>
>> 
>> —ruby
>> 
>> [1] 
>> https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html
>> [2] 
>> http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml#n1913
>> [3] https://wiki.openstack.org/wiki/Ironic/Developer_guidelines#Deprecations
>> [4] https://review.openstack.org/#/c/284160
>> 
>> 
>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic][openstackclient] deprecation process

2016-05-12 Thread Loo, Ruby
Hi OpenStackClient folks,

Ironic is following the standard deprecation process [1]. We added an OSC 
plugin and realized that we didn’t get the commands quite right. This patch [2] 
adds the right commands and deprecates the wrong ones. My question is what the 
deprecation process might be. Since it is a plugin to OSC, should it follow 
OSC’s deprecation process and if so, what might that process be? Or since the 
commands are related to ironic, should it follow ironic’s deprecation process? 
In particular, I wanted to know how long should/must we support those 
deprecated commands.

For the user’s sake, it seems like it would make sense that all OSC (plugin or 
not, does the user know the difference?) commands follow the same deprecation 
policy.

I took a quick look and didn’t see anything documented about this, so I might 
have missed it.

What sez you?

—ruby

[1] 
https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html
[2] https://review.openstack.org/#/c/284160

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Easing contributions to central documentation

2016-05-12 Thread Andreas Jaeger
On 2016-05-12 15:39, Jeremy Stanley wrote:
> On 2016-05-12 07:33:35 -0600 (-0600), Matt Kassawara wrote:
> [...]
>> I'm also not a fan of option 3 because it trades one kind of technical debt
>> for another. However, one could argue that some (relevant) content is
>> better than no (or defunct) content. Interestingly, option 3 also reflects
>> what ultimately happens if projects decide to maintain all documentation in
>> their respective repositories. Easier for developers to contribute, but at
>> the expense of usability by our various audiences.
> 
> While not a frequent reviewer of changes to Docs team repos, I tend
> to agree that option 3 is just shuffling around (or even increasing)
> the overall pain.
> 
> For option 2 keep in mind that our current version of Gerrit allows
> you to make edits from its Web UI in your browser, so it may be
> almost as easy to correct trivial issues while you're reviewing
> instead of commenting on them.
> 

And that's what I'm sometimes doing if it's just some minor issues:
edit, publish, summarize my changes - and then +2 ;). That's a really
nice feature of our Web UI.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Easing contributions to central documentation

2016-05-12 Thread Jeremy Stanley
On 2016-05-12 07:33:35 -0600 (-0600), Matt Kassawara wrote:
[...]
> I'm also not a fan of option 3 because it trades one kind of technical debt
> for another. However, one could argue that some (relevant) content is
> better than no (or defunct) content. Interestingly, option 3 also reflects
> what ultimately happens if projects decide to maintain all documentation in
> their respective repositories. Easier for developers to contribute, but at
> the expense of usability by our various audiences.

While not a frequent reviewer of changes to Docs team repos, I tend
to agree that option 3 is just shuffling around (or even increasing)
the overall pain.

For option 2 keep in mind that our current version of Gerrit allows
you to make edits from its Web UI in your browser, so it may be
almost as easy to correct trivial issues while you're reviewing
instead of commenting on them.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Easing contributions to central documentation

2016-05-12 Thread Matt Kassawara
I'm also not a fan of option 3 because it trades one kind of technical debt
for another. However, one could argue that some (relevant) content is
better than no (or defunct) content. Interestingly, option 3 also reflects
what ultimately happens if projects decide to maintain all documentation in
their respective repositories. Easier for developers to contribute, but at
the expense of usability by our various audiences.

On Thu, May 12, 2016 at 6:56 AM, Brian Curtin  wrote:

> On Thu, May 12, 2016 at 1:24 AM, Joseph Robinson
>  wrote:
> > Hi All, One reply inline:
> >
> > On 11/05/2016, 7:33 AM, "Lana Brindley" 
> wrote:
> >
> >>On 10/05/16 20:08, Julien Danjou wrote:
> >>> On Mon, May 09 2016, Matt Kassawara wrote:
> >>>
>  So, before developer frustrations drive some or all projects to move
>  their documentation in-tree, which negatively impacts the goal of
>  presenting a coherent product, I suggest establishing an agreement
>  between developers and the documentation team regarding the review
>  process.
> >>>
> >>> My 2c, but it's said all over the place that OpenStack is not a
> product,
> >>> but a framework. So perhaps the goal you're pursuing is not working
> >>> because it's not accessible by design?
> >>>
>  1) The documentation team should review the patch for compliance with
>  conventions (proper structure, format, grammar, spelling, etc.) and
> provide
>  feedback to the developer who updates the patch.
>  2) The documentation team should modify the patch to make it compliant
> and
>  ask the developer for a final review prior to merging it.
>  3) The documentation team should only modify the patch to make it
> build (if
>  necessary) and quickly merge it with a documentation bug to resolve
> any
>  compliance problems in a future patch by the documentation team.
> >
> > I like the idea of options 2 and 3. Specifically though, I think Option 3
> > - merging content that builds, and checking out a bug to improve the
> > quality - can work in some cases. With dedicated teams on several
> > guides, docs contributors would be able to pick up bugs right away -
> > that's my 2c.
>
> This is just enabling technical debt as process and ultimately hurts
> the users of the docs who end up with poorly worded or incorrect
> information by letting subpar -- but syntactically correct --
> documentation in. Even a dedicated team isn't going to get around to
> everything, and someone coming in later to pick up bugs to document is
> less likely to be the right person to convey the proper explanation
> and details than the one who implemented the work.
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [javascript] [infra] NPM Mirrors (IMPORTANT)

2016-05-12 Thread Michael Krotscheck
Thanks, Vitaly - I'll add it to the list of things to investigate. I suspect
it might be a cold cache; however, trusting my gut when it comes to
JavaScript-related things has burned me in the past.

Michael

On Thu, May 12, 2016 at 4:10 AM Vitaly Kramskikh 
wrote:

> Hi, Michael,
>
> I randomly get "error parsing json" for the fuel-ui project:
> http://paste.openstack.org/show/496871/. Got such errors 2 times out of
> 5.
>
> 2016-05-11 22:07 GMT+03:00 Michael Krotscheck :
>
>> Hello everyone!
>>
>> We've recently added NPM mirrors to our infrastructure, and are about to
>> turn them on. Before that happens, however, we'd like to get a sanity check
>> from impacted projects to make sure that we don't wedge your gate.
>>
>> If you are in charge of a project that invokes `npm install` during any
>> of its gate jobs, then please invoke the following commands at your project
>> root.
>>
>> echo "registry=http://mirror.dfw.rax.openstack.org/npm/; >> .npmrc
>> rm -rf ./node_modules/
>> rm -rf ~/.npm/
>> npm install
>>
>> If you encounter an error, put it in paste.openstack.org and reply to
>> this thread. If not, great! Delete the .npmrc file and go on your merry way.
>>
>> Have a great day!
>>
>> Michael
>>
>>
>
>
> --
> Vitaly Kramskikh,
> Fuel UI Tech Lead,
> Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] JavaScript RoadMap for OpenStack Newton

2016-05-12 Thread Michael Krotscheck
Hello there, Anton-

In Mitaka, most of OpenStack's services have landed CORS support. While
it's not yet "Automagic" (i.e. it still requires some manual
configuration), we no longer have to rely on any server component to render
a user interface.

The big outstanding things we need to do for CORS support to be awesome are
to make sure it lands in newer APIs (like Freezer), and to figure out a
sane and idempotent way for our middleware to configure itself from the
trusted-dashboards list in keystone.
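
For anyone who hasn't tried it yet, the manual configuration is just a [cors]
section in each service's configuration file, roughly like this (the origin
below is a placeholder; option names are from oslo.middleware):

  [cors]
  allowed_origin = https://dashboard.example.com
  allow_credentials = true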

Michael

On Thu, May 12, 2016 at 12:35 AM Anton Zemlyanov 
wrote:

> Hi,
>
> I have a question on js-openstacklib. If it is intended for browsers,
> there will be lots of issues with the cross-domain security policy: a
> browser cannot just go to any REST resource it wants. Either a server-side
> proxy is required, or the cooperation of all the REST services we want to
> talk to. How do we want to handle the cross-domain stuff?
>
> Anton
> On Thu, Apr 21, 2016 at 5:35 PM, Michael Krotscheck 
> wrote:
>
>> This post contains the current working draft of the JavaScript roadmap
>> which myself and Beth Elwell will be working on in Newton. It’s a big list,
>> and we need help - Overall themes for this cycle are Consistency,
>> Interoperability, and engaging with the JavaScript community at large. Our
>> end goal is to build the foundations of a JavaScript ecosystem, which
>> permits the creation of entirely custom interfaces.
>>
>> Note: We are not trying to replace Horizon, we are aiming to help those
>> downstream who need something more than “Vanilla OpenStack”. If you'd like
>> to have a discussion on this point, I'd be happy to have that under a
>> different subject.
>>
>> Continue Development: ironic-webclient
>>
>> The ironic-webclient will release its first version during the Newton
>> cycle. We’re awfully close to having the basic set of features supported,
>> and with some excellent feedback from the OpenStack UX team, will also have
>> a sexy new user interface that’s currently in the review queue. Once this
>> work is complete, we will begin extracting common components into a new
>> project, named…
>>
>> New: js-openstacklib
>>
>> This new project will be incubated as a single, gate-tested JavaScript
>> API client library for the OpenStack API’s. Its audience is software
>> engineers who wish to build their own user interface using modern
>> javascript tools. As we cannot predict downstream use cases, special care
>> will be taken to ensure the project’s release artifacts can eventually
>> support both browser and server based applications.
>>
>> Philosophically, we will be taking a page from the python-openstackclient
>> book, and avoid creating a new project for each of OpenStack’s services. We
>> can make sure our release artifacts can be used piecemeal, however trying
>> to maintain code consistency across multiple different projects is a hard
>> lesson that others have already learned for us. Let’s not do that again.
>>
>> New: js-generator-openstack
>>
>> Yeoman is JavaScript’s equivalent of cookiecutter, providing a
>> scaffolding engine which can rapidly set up, and maintain, new projects.
>> Creating and maintaining a yeoman generator will be a critical part of
>> engaging with the JavaScript community, and can drive adoption and
>> consistency across OpenStack as well. Furthermore, it is sophisticated
>> enough that it could also support many things that exist in today’s Python
>> toolchain, such as dependency management, and common tooling maintenance.
>>
>> Development of the yeoman generator will draw in lessons learned from
>> OpenStack’s current UI Projects, including Fuel, StoryBoard, Ironic,
>> Horizon, Refstack, and Health Dashboard, and attempt to converge on common
>> practices across projects.
>>
>> New (exploration): js-npm-publish-xstatic
>>
>> This project aims to bridge the gap between our JavaScript projects, and
>> Horizon’s measured migration to AngularJS. We don’t believe in duplicating
>> work, so if it is feasible to publish our libraries in a way that Horizon
>> may consume (via the existing xstatic toolchain), then we certainly should
>> pursue that. The notable difference is that our own projects, such as
>> js-openstacklib, don’t have to go through the repackaging step that our
>> current xstatic packages do; thus, if it is possible for us to publish to
>> npm and to xstatic/pypi at the same time, that would be best.
>>
>> New: Xenial Build Nodes
>>
>> As of two weeks ago, OpenStack’s Infrastructure is running a version of
>> Node.js and npm more recent than what is available on Trusty LTS.
>> Ultimately, we would like to converge this version on Node4 LTS, the
>> release version maintained by the Node foundation. The easiest way to do
>> this is to simply piggyback on Infra’s impending adoption of Xenial build
>> nodes, though some work is required to ensure this transition goes smoothly.
>>
>> Maintain: 
