Re: [openstack-dev] [requirements] Cruft entries found in global-requirements.txt

2016-05-06 Thread Haïkel
Started removing some entries; I guess I have a big cleanup to do on the RDO side.

H.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Austin summit feature classification session recap

2016-05-06 Thread Matt Riedemann
On Thursday morning John Garbutt led a session on feature classification 
in Nova. The full etherpad is here [1].


We've had a concept of this in the Nova devref for a while [2].

The goals of the session were to agree on what this effort is trying to 
fix and to figure out a plan for working on it.


The point of feature classification is to identify what features in Nova 
are incomplete. This can mean they aren't fully tested, documented, etc. 
The idea is to communicate to users and operators what works for their 
technology choices, e.g. which hypervisor they use, shared vs non-shared 
storage, etc.


We also want it as a way to identify the gaps in testing and 
documentation so we can work on closing those gaps. There are then 
levels of completeness applied to a feature or scenario:


* Incomplete, e.g. cells v2
* Experimental, e.g. cells v1
* Complete, e.g. attach a volume to a server instance
* Complete and required, e.g. create and destroy a server instance
* Deprecated, e.g. nova-network

We can also use feature classification as a means to identify things 
that need to be deprecated, e.g. agent builds.


We also talked about how best to present this information so it's 
understandable to mere mortals.


We have the (hypervisor) feature support matrix already [3]. That's 
useful when you're drilling down into the lower level features that each 
virt driver (and even architecture for a virt driver like libvirt, for 
example) supports, but it's hard to parse from a high level.


So we agreed that for feature classification we'd start out with some 
high-level use cases. For example, network function virtualization, 
high-performance computing, pets (legacy application workloads) vs 
cattle (dev/test) clouds, etc. This is sort of like the architecture 
design guide [4]. Then from those use cases we start filling out the 
features you'd want for each one and then get into their level of 
completeness.


For Newton, John wants to accomplish the following:

* Get the infrastructure in place for creating the document within Nova, 
sort of like what we have for the feature support matrix, i.e. docs 
built from an ini/json/yaml file.


* Identify the use case categories, e.g. NFV, HPC, etc.

* Break those down into feature categories, and classifications, based 
on the existing hypervisor support matrix and DefCore.


* Then start filling out the table.

John has an example POC here [5]. Note that it's built from the 
docs job and will probably be gone soon, so I have an image of the table 
here [6].
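
To make the ini/json/yaml idea above concrete, here is a rough sketch of 
what such a data file and build step could look like. The section and key 
names are invented for illustration; the real schema would come out of the 
infrastructure work John describes.

# Sketch of a docs build step driven by an ini data file. The schema below
# is hypothetical; it only illustrates the "docs built from data" idea.
import configparser
import io

FEATURES_INI = """
[feature.attach-volume]
title = Attach a volume to a server instance
maturity = complete

[feature.cells-v1]
title = Cells v1
maturity = experimental
"""

parser = configparser.ConfigParser()
parser.read_file(io.StringIO(FEATURES_INI))

for section in parser.sections():
    feature = parser[section]
    print('%s: %s' % (feature['title'], feature['maturity']))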


Future work will include:

* Populating links to existing test results which can be community infra 
gate/check jobs/tests or third party CI results.


* Adding Tempest test uuids per feature and then cross-referencing the 
test uuids with recent test results to automatically calculate whether a 
feature is working or not (see the sketch after this list).


* Linking to docs for each category.

* Adding warning log messages for any big gaps in testing and potentially 
proposing deprecation for some features unless testing is added.
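
As a rough illustration of the cross-referencing idea above (the 
uuid-to-feature mapping and the results format are assumptions, not an 
actual Tempest/subunit API):

# Sketch: decide whether a feature "works" by checking that every Tempest
# test uuid mapped to it passed in a recent run. Inputs are illustrative.
FEATURE_TESTS = {
    'attach-volume': ['uuid-a', 'uuid-b'],  # Tempest test uuids (made up)
}

def feature_status(feature, recent_results):
    """recent_results maps a test uuid to 'success' or 'fail'."""
    uuids = FEATURE_TESTS.get(feature, [])
    if not uuids:
        return 'unknown'    # no tests mapped, so we can't claim anything
    outcomes = [recent_results.get(u) for u in uuids]
    if any(o is None for o in outcomes):
        return 'untested'   # a mapped test didn't run recently
    if all(o == 'success' for o in outcomes):
        return 'working'
    return 'broken'

print(feature_status('attach-volume', {'uuid-a': 'success', 'uuid-b': 'fail'}))
# -> broken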


[1] https://etherpad.openstack.org/p/newton-nova-feature-classification
[2] http://docs.openstack.org/developer/nova/feature_classification.html
[3] http://docs.openstack.org/developer/nova/support-matrix.html
[4] http://docs.openstack.org/arch-design/
[5] 
http://docs-draft.openstack.org/19/264719/7/check/gate-nova-docs/890de6a//doc/build/html/feature_classification.html#prototype-feature-support-matrix

[6] http://imgur.com/4rxX9V7

--

Thanks,

Matt Riedemann




[openstack-dev] [neutron][osc] Austin Design summit summary on the future of Neutron client

2016-05-06 Thread Akihiro Motoki
In Austin we had a session on the future of the neutron client and
discussed the CLI transition to OpenStack Client (OSC).
The session etherpad can be found at [1].

* We checked the progress of the OSC transition and it is good:
  the 11 resources targeted by the initial effort are now supported.
  In the Newton cycle, we will focus on achieving feature parity with
the existing 'neutron' CLI.

* We agreed that OSC support for neutron advanced services will be
done via an OSC plugin (see the sketch after this list).
  BGP stuff (neutron-dynamic-routing) will be supported via an OSC
plugin as well.
  neutron-dynamic-routing needs to be added to the list at [2].
  Future official sub-projects (possibly like sfc, l2gw) will be
handled in the same way.

* CLI support for new features should be implemented in OpenStack
Client (and openstacksdk).
  All new CLI support should go to OSC; neutronclient CLI support is
optional.
  Around the feature freeze, the neutron and openstackclient teams will
communicate more closely to coordinate a new release.

* python bindings in neutronclient:
  All features provided by the main neutron repo will be supported by
openstackclient and openstacksdk.
  python bindings need to be added to python-neutronclient only if
an openstack service needs to use them.
  (for example, the get-me-a-network python binding is required by nova.)

* We discussed the appropriate place for admin commands:
  the OSC repo vs an OSC plugin in the python-neutronclient repo.
  If admin commands are provided by an OSC plugin, it will reduce the
number of commands that regular users see.
  On the other hand, API permissions can be configured by the policy.
One option is to install the OSC plugin
  which provides admin commands only if users want to use them.
  In my understanding, no actual consensus was reached in the session.
  (Note that a similar discussion happened for nova OSC support on
the dev list after the summit. [3])
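
For reference, an OSC plugin hooks in via setuptools entry points. A
minimal sketch follows; the 'openstack.cli.extension' namespace is the
documented OSC plugin hook, while the package name, module paths and
command are illustrative, not the actual neutron-dynamic-routing layout.

# Hypothetical setup.py fragment for an OSC plugin. Module paths and the
# command name are made up; the entry-point namespaces follow the OSC
# plugin convention.
from setuptools import setup

setup(
    name='neutron-dynamic-routing',
    packages=['neutron_dynamic_routing'],
    entry_points={
        # Registers the package as an OSC command group provider.
        'openstack.cli.extension': [
            'neutron_dynamic_routing = neutron_dynamic_routing.osc',
        ],
        # Individual commands, keyed by the plugin's API version namespace.
        'openstack.neutron_dynamic_routing.v2': [
            'bgp_speaker_create = '
            'neutron_dynamic_routing.osc.bgp:CreateBgpSpeaker',
        ],
    },
)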

Thanks,
Akihiro

[1] https://etherpad.openstack.org/p/newton-neutron-future-neutron-client
[2] 
https://github.com/openstack/python-neutronclient/blob/master/doc/source/devref/transition_to_osc.rst#developer-guide
[3] http://lists.openstack.org/pipermail/openstack-dev/2016-May/093955.html



Re: [openstack-dev] [nova] Austin summit priorities session recap

2016-05-06 Thread Matt Riedemann

On 5/6/2016 1:37 PM, Nikhil Komawar wrote:

Thanks for sending this out, Matt. I added an inline comment here.

On Thu, May 5, 2016 at 8:34 PM, Matt Riedemann wrote:

There are still a few design summit sessions from the summit that
I'll recap but I wanted to get the priorities session recap out as
early as possible. We held that session in the last slot on
Thursday. The full etherpad is here [1].

The first part of the session was mostly going over schedule milestones.

We already started Newton with a freeze on spec approvals for new
things since we already have a sizable backlog [2]. Now that we're
past the summit we can approve specs for new things again.

The full Newton release schedule for Nova is in this wiki [3].

These are the major dates from here on out:

* June 2: newton-1, non-priority spec approval freeze
* June 30: non-priority feature freeze
* July 15: newton-2
* July 19-21: Nova Midcycle
* Aug 4: priority spec approval freeze
* Sept 2: newton-3, final python-novaclient release, FeatureFreeze,
Soft StringFreeze
* Sept 16: RC1 and Hard StringFreeze
* Oct 7, 2016: Newton Release

The important thing for most people right now is we have exactly
four weeks until the non-priority spec approval freeze. We then have
about one month after that to land all non-priority blueprints.

Keep in mind that we've already got 52 approved blueprints and most
of those were re-approved from Mitaka, so have been approved for
several weeks already.

The non-priority blueprint cycle is intentionally restricted in
Newton because of all of the backlog work we've had spilling over
into this release. We really need to focus on getting as much of
that done as possible before taking on more new work.

For the rest of the priorities session we talked about what our
actual review priorities are for Newton. The list with details and
owners is already available here [4].

In no particular order, these are the review priorities:

* Cells v2
* Scheduler
* API Improvements
* os-vif integration
* libvirt storage pools (for live migration)
* Get Me a Network
* Glance v2 Integration


I saw the priorities review (https://review.openstack.org/#/c/312217/)
has been merged, so I wanted to point that out here. I know the Nova team
cares about the history section of the specs, so the dates are clearer
from the links posted in the comment. To be more explicit: the Glance v2
work (BP+code) was initially proposed in Icehouse, and the co-located
mid-cycle was in Kilo, where we had a brief session on the Glance v2 work
(& thanks to all the Nova members who have been giving their input). This
is also the reason for my comments/questions in the etherpad.



We *should* be able to knock out glance v2, get-me-a-network and
os-vif relatively soon (I'm thinking sometime in June).

Not listed in [4] but something we talked about was volume
multi-attach with Cinder. We said this was going to be a 'stretch
goal' contingent on making decent progress on that item by
non-priority feature freeze *and* we get the above three smaller
priority items completed.

Another thing we talked about but isn't going to be a priority is
NFV-related work. We talked about cleaning up technical debt and
additional testing for NFV, but no one in the session signed up
to own that work or came with concrete proposals on how to make
improvements in that area. Since we can't assign review priorities
to something that nebulous it was left out. Having said that, Moshe
Levi has volunteered to restart and lead the SR-IOV/PCI bi-weekly
meeting [5] (thanks again, Moshe!). So if you (or your employer, or
your vendor) are interested in working on NFV in Nova please attend
that meeting and get involved in helping out that subteam.

[1] https://etherpad.openstack.org/p/newton-nova-summit-priorities
[2]
http://lists.openstack.org/pipermail/openstack-dev/2016-March/090370.html
[3] https://wiki.openstack.org/wiki/Nova/Newton_Release_Schedule
[4]

https://specs.openstack.org/openstack/nova-specs/priorities/newton-priorities.html
[5]
http://lists.openstack.org/pipermail/openstack-dev/2016-April/093541.html

--

Thanks,

Matt Riedemann







Re: [openstack-dev] [vitrage] [congress] Vitrage-Congress Collaboration

2016-05-06 Thread Tim Hinrichs
Hi Alexey,

Thanks for the overview of how you see a Congress-Vitrage integration being
valuable.

I'd imagine that the right first step in this integration would be creating
a new datasource driver within Congress to pull data from Vitrage.  It
doesn't need to pull all the data in your list to start, but enough so that
we can try writing policy over that data.  It's helpful to have a policy in
mind that you want to write and then set up the datasource driver to grab
enough of the Vitrage data to write that policy.  Here are the relevant
docs.

Datasource drivers
http://docs.openstack.org/developer/congress/cloudservices.html

Writing policy
http://docs.openstack.org/developer/congress/policy.html
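
To make that first step a bit more concrete, a rough skeleton of such a
driver might look like the following. The translator dict follows the
conventions described in the datasource driver doc above, but the
base-class details and the Vitrage call are assumptions on my part, not
working code.

# Rough sketch of a Congress datasource driver for Vitrage alarms.
# _list_vitrage_alarms() is an assumed helper, not a real API.
from congress.datasources import datasource_driver


class VitrageDriver(datasource_driver.DataSourceDriver):
    ALARMS = 'alarms'

    value_trans = {'translation-type': 'VALUE'}
    alarms_translator = {
        'translation-type': 'HDICT',
        'table-name': ALARMS,
        'selector-type': 'DICT_SELECTOR',
        'field-translators': (
            {'fieldname': 'vitrage_id', 'translator': value_trans},
            {'fieldname': 'resource_id', 'translator': value_trans},
            {'fieldname': 'state', 'translator': value_trans},
        )}
    TRANSLATORS = [alarms_translator]

    def update_from_datasource(self):
        # Assumed helper returning Vitrage alarms as a list of dicts.
        alarms = self._list_vitrage_alarms()
        self.state = {self.ALARMS: set()}
        for table, row in VitrageDriver.convert_objs(
                alarms, self.alarms_translator):
            self.state[table].add(row)

A policy could then match on the resulting alarms table, e.g. to trigger a
migration workflow when a host-level alarm appears.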

Let me know if you have any questions,
Tim



On Wed, May 4, 2016 at 11:51 PM Weyl, Alexey (Nokia - IL) <
alexey.w...@nokia.com> wrote:

> Hi to all Vitrage and Congress contributors,
>
> We had a good introduction meeting in Austin and we (Vitrage) think that
> we can have a good collaboration between the projects.
>
> Vitrage, as an OpenStack Root Cause Analysis (RCA) engine, builds a
> topology graph of all the entities in the system (physical, virtual and
> application) from different datasources. It can thus enrich Congress by
> providing more data about what is happening in the system. Additionally,
> the Vitrage RCA and deduced alarms & states mechanism can enhance the
> visibility of faults and how they inter-relate.  By using this information
> Congress could then execute different policies and perform more accurate
> actions.
>
> Another good property of Vitrage is that it can also receive data from
> non-openstack sources, like Nagios, which monitors physical resources,
> including switches (which are not modeled today in OpenStack).
>
> There are many ways in which Congress-Vitrage combination would be
> helpful. To take just one example:
> a. If a physical Switch is down, Vitrage can raise deduced alarms on the
> connected hosts and on the virtual machines affected by this change in
> switch state.
> b. Congress will then be notified by Vitrage about these alarms, which can
> set off Congress policies of migration.
> c. Furthermore, due to the RCA functionality, Congress will be aware that
> the Switch error is the source of the problem, and can determine the best
> place to create new instances of the VMs so that this  switch fault will
> not impact the new instances.
>
> As you can see, for each fault, we can use Vitrage to link it to other
> faults, and create alarms to reflect them. This is all done via Vitrage
> Templates, so the system is configurable to the needs of the user. Thus
> many more cases such as the example above could be thought of.
>
> To summarize, Vitrage can enrich Congress with the following four features:
> a. RCA
> b. Deduced alarms
> c. Physical, virtual and application layers
> d. Graph structure and topology of the system that defines the connections
> and relationships between all entities on which we can run quick graph
> algorithms to decide different actions to perform
>
> If you can think of additional use cases that can be used here, please
> share ☺
>
> For more data about Vitrage and its insights please take a look here:
> https://wiki.openstack.org/wiki/Vitrage
>
> Best Regards,
> Alexey Weyl
>


Re: [openstack-dev] [requirements] Cruft entries found in global-requirements.txt

2016-05-06 Thread Jeremy Stanley
On 2016-05-06 17:18:52 -0400 (-0400), Matthew Treinish wrote:
> On Fri, May 06, 2016 at 04:40:28PM -0400, Davanum Srinivas wrote:
[...]
> > Example feedparser is in a few projects, but not used in those
> > projects, so we can file reviews to clean up entries in specific
> > projects (except openstack-health where it is definitely used), and
> 
> FWIW, openstack-health doesn't subscribe to global-requirements,
> there isn't any reason to, it's a standalone project so
> co-installability with openstack services isn't necessary. So,
> feel free to drop it from g-r if nothing else is using it.

Yep, in general if codesearch turns up use in a repo which isn't
mentioned in
http://git.openstack.org/cgit/openstack/requirements/tree/projects.txt
then the cruft requirement is probably still a candidate for
cleanup.
-- 
Jeremy Stanley



Re: [openstack-dev] [oslo][mistral] Saga of process than ack and where can we go from here...

2016-05-06 Thread Joshua Harlow

Dmitry Tantsur wrote:

On 05/03/2016 11:24 PM, Joshua Harlow wrote:

Howdy folks,

So I met up with *some* of the mistral folks on Friday last week at
the summit and I was wondering if we as a group can find a path to help
that project move forward in their desire to have some kind of
process-then-ack (vs the existing ack-then-process) in their usage of
the messaging layer.

I got to learn that the following exists in mistral (sad-face):

https://github.com/openstack/mistral/blob/master/mistral/engine/rpc.py#L38


And it got me thinking about how/if we can as a group possibly allow a
variant of https://review.openstack.org/#/c/229186/ to get worked on,
merged in and released so that the above 'hack' can be removed.
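
For anyone skimming, the difference boils down to the order of two calls
on the consumer side; a toy sketch (the message/handle names here are
illustrative, not an oslo.messaging API):

# Toy illustration of the two ack orderings; not an oslo.messaging API.

def ack_then_process(message, handle):
    # Current behavior: the broker forgets the message before the work
    # happens, so a crash inside handle() loses the work (at-most-once).
    message.ack()
    handle(message.payload)

def process_then_ack(message, handle):
    # What the hack above achieves: the broker redelivers on a crash, so
    # handle() may run more than once and therefore must be idempotent
    # (at-least-once).
    handle(message.payload)
    message.ack()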


Hey, lemme weigh in from ironic-inspector PoV.

As you maybe remember, we also need a queue with the possibility of both
ways of ack'ing for our HA work. So something like this patch doesn't
seem to help us at all. We'll probably have to cargo-cult the mistral
approach.


You seem to be thinking about the queue as an implementation vs thinking 
about what API you really need and then backing that API with a queue 
(if you so want to).


That is where https://review.openstack.org/#/c/260246/ comes into play here, 
because it thinks about the API first and the impl second (and if you 
really want 2 impls, well they are at 
https://github.com/openstack/taskflow/tree/master/taskflow/jobs/backends 
but I'm avoiding trying to bring those into the picture, because the 
top-level API seems unclear here still).


I guess it goes back to the 'why are people trying to use a message 
queue as a work queue' when the semantics of these are different (and 
let's not get into why we use a message queue as an RPC layer while we 
are at it, ha).




Is it possible to have a manual ack feature? I.e. for the handler to
choose when to ack.



I would also like to come to some kind of understanding that we
(mistral folks would hopefully help here) would remove this kind of
change in the future as the longer-term goal (of something like
https://review.openstack.org/#/c/260246/) progresses.

Thoughts from folks (mistral and oslo)?

Is there any way we can create a solution that works in the short term
(allowing for that hack to be removed) while working toward the
longer-term goal?

-Josh








Re: [openstack-dev] [OpenStack-docs] What's Up, Doc? 6 May 2016

2016-05-06 Thread Matt Kassawara
One significant advantage of central documentation involves providing
content in a single location with consistent structure or format that best
serves the particular audience. Moving most or all documentation into
project trees essentially eliminates this advantage, leaving our audiences
with an impression that OpenStack consists of many loosely associated
projects rather than a coherent cloud computing solution. However, as a
contributor to a few other OpenStack projects who helps other developers
contribute to central documentation, I can understand some of the
frustrations with it. I prefer to resolve these frustrations and have some
ideas that I intend to float in a separate thread, but if you don't think
that's possible, consider submitting a spec to change the primary purpose
of the central documentation team to simply managing links to content in
the project trees.

On Fri, May 6, 2016 at 10:03 AM, Ildikó Váncsa 
wrote:

> Hi Lana,
>
> Thanks for the summary, it's pretty good reading to catch up on what
> happened recently.
>
> I have one question; I might have missed a few entries, so please point me
> to the right document in that case. We had a docco session with the
> Telemetry team and we agreed that moving the documentation snippets, like
> for instance the Install Guide, back to the project trees is a really good
> step, and we're very supportive. In this sense I would like to ask about
> the plans regarding the Admin Guide. We have a chapter there which is on
> one hand outdated and on the other hand would be better moved under the
> project trees as well. Is this plan/desire in line with your plans
> regarding that document?
>
> Thanks,
> /Ildikó
>
> > -Original Message-
> > From: Lana Brindley [mailto:openst...@lanabrindley.com]
> > Sent: May 06, 2016 08:13
> > To: enstack.org; OpenStack Development Mailing List;
> openstack-i...@lists.openstack.org
> > Subject: What's Up, Doc? 6 May 2016
> >
> > Hi everyone,
> >
> > I hope you all had a safe journey home from Summit, and are now fully
> recovered from all the excitement (and jetlag)! I'm really
> > pleased with the amount of progress we made this time around. We have a
> definitive set of goals for Newton, and I'm confident that
> > they're all moving us towards a much better docs suite overall. Of
> course, the biggest and most important work we have to do is to get
> > our Install Guide changes underway. I'm very excited to see the new
> method for documenting OpenStack installation, and can't wait
> > to see all our big tent projects contributing to docs in such a
> meaningful way. Thank you to everyone (in the room and online) who
> > contributed to the Install Guide discussion, and helped us move forward
> on this important project.
> >
> > In other news, I've written a wrapup of the Austin design summit on my
> blog, which you might be interested in:
> > http://lanabrindley.com/2016/05/05/openstack-newton-summit-docs-wrapup/
> >
> > == Progress towards Newton ==
> >
> > 152 days to go!
> >
> > Bugs closed so far: 61
> >
> > Because we have such a specific set of deliverables carved out for
> Newton, I've made them their own wiki page:
> > https://wiki.openstack.org/wiki/Documentation/NewtonDeliverables
> > Feel free to add more detail and cross things off as they are achieved
> throughout the release. I will also do my best to ensure it's kept
> > up to date for each newsletter.
> >
> > One of the first tasks we've started work on after Summit is moving the
> Ops and HA Guides out of their own repositories and into
> > openstack-manuals. As a result, those repositories are now frozen, and
> any work you want to do on those books should be in
> > openstack-manuals.
> >
> > We are almost ready to publish the new RST version of the Ops Guide,
> there's just a few cleanup edits going in now, so make sure you
> > have the right book, in the right repo from now on. This was our very
> last book remaining in DocBook XML, so the docs toolchain will
> > be removing DocBook XML support. See spec
> https://review.openstack.org/311698 for details.
> >
> > Another migration note is that the API reference content is moving from
> api-site to project specific repositories and api-site is now
> > frozen. For more detail, see Anne's email:
> http://lists.openstack.org/pipermail/openstack-docs/2016-May/008536.html
> >
> > == Mitaka wrapup ==
> >
> > We performed a Mitaka retrospective at Summit, notes are here:
> https://etherpad.openstack.org/p/austin-docs-mitakaretro
> >
> > In particular, I'd like to call out our hard working tools team Andreas
> and Christian, all our Speciality Team leads, and the Mitaka release
> > managers Brian and Olga. Well done on a very successful release,
> everyone :)
> >
> > Total bugs closed: 645
> >
> > == Site Stats ==
> >
> > Thanks to the lovely people at Foundation (thanks Allison!) I now have
> access to more stats than I could possibly guess what to do
> > with, and I'm hoping to be 

Re: [openstack-dev] [oslo][mistral] Saga of process than ack and where can we go from here...

2016-05-06 Thread Joshua Harlow

So then let's all get onboard https://review.openstack.org/#/c/260246/?

I've yet to see how all these things called 'process-then-ack' don't 
fit into the API in that review. IMHO most of what people are 
trying to fit into oslo.messaging here aren't really messages but 
jobs to be completed that should *only* be acked when they are actually 
complete.


Which is in part what that review adds/does (it extracts the job[1] part 
from taskflow so others can use it without, say, taking in the rest of 
taskflow).


[1] http://docs.openstack.org/developer/taskflow/jobs.html
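
The jobboard pattern in a nutshell, as I understand it from those docs
(treat the configuration and call details as approximate, not gospel):

# Sketch of the taskflow jobboard pattern: a job is only consumed (the
# "ack") after the work completes, and abandoned on failure so another
# worker can claim and retry it.
import contextlib

from taskflow.jobs import backends as job_backends

def do_work(details):
    print('processing %s' % details)  # stand-in for the real work

conf = {'board': 'zookeeper', 'hosts': ['localhost:2181']}
with contextlib.closing(job_backends.fetch('my-board', conf)) as board:
    board.connect()
    for job in board.iterjobs(only_unclaimed=True):
        board.claim(job, 'worker-1')
        try:
            do_work(job.details)
            board.consume(job, 'worker-1')   # ack only after completion
        except Exception:
            board.abandon(job, 'worker-1')   # let another worker retry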

Dmitry Tantsur wrote:

On 05/04/2016 08:21 AM, Mehdi Abaakouk wrote:


Hi,


That said, I agree with Mehdi that *most* RPC calls throughout
OpenStack,
not being idempotent, should not use process-then-ack.


That's why I think we must not call this RPC. And the new API should be
clear about the expected idempotency of the application callbacks.


Thoughts from folks (mistral and oslo)?


Also, I was not at the Summit; should I conclude the Tooz+taskflow
approach (that ensures the idempotency of the application within the
library API) has not been accepted by the mistral folks?



Taskflow is pretty opinionated about the whole application design. We
can't use it in ironic-inspector, but we also need process-then-ack
semantics for our HA work.





Re: [openstack-dev] [requirements] Cruft entries found in global-requirements.txt

2016-05-06 Thread Matthew Treinish
On Fri, May 06, 2016 at 04:40:28PM -0400, Davanum Srinivas wrote:
> Thanks Jeremy,
> 
> Since the following folks volunteered to help with requirements:
> Dirk Mueller (SUSE)
> Haïkel Guémar (RDO/CentOS)
> Igor Yozhikov (MOS)
> Alan Pevec (RDO)
> Tony Breeds (Rackspace)
> Ghe Rivero (HPE)
> 
> So Dirk, Haïkel, Igor, Alan, Tony, Ghe, can you please do more
> research and file reviews wherever needed to clean up?
> 
> For example, feedparser is in a few projects, but not used in those
> projects, so we can file reviews to clean up entries in specific
> projects (except openstack-health where it is definitely used), and

FWIW, openstack-health doesn't subscribe to global-requirements, there isn't
any reason to, it's a standalone project so co-installability with openstack
services isn't necessary. So, feel free to drop it from g-r if nothing else is
using it.

-Matt Treinish

> also file a review against g-r and u-c to clean up the entries in the
> requirements repo as well.
> 
> If anyone else wants to get involved in requirements, please join the
> fun as well.
> 
> Thanks,
> Dims
> 
> 
> On Fri, May 6, 2016 at 2:44 PM, Jeremy Stanley  wrote:
> > On 2016-05-06 13:38:43 -0500 (-0500), Brant Knudson wrote:
> >> python-ldap and ldappool are in keystone's setup.cfg:
> >> http://git.openstack.org/cgit/openstack/keystone/tree/setup.cfg#n24
> >
> > Yep, that script will miss things like optional dependencies
> > declared in setup.cfg or things installed directly from setup.py.
> > Also I'm pretty sure it predates our python-version-specific entries
> > so it may not be parsing those entirely correctly. Plugging
> > potential cruft into http://codesearch.openstack.org/ might help
> > rule out some possibilities. When I started trying to clean this up
> > a while back (2014 maybe?) I noticed at least a few where `git blame
> > global-requirements.txt` led me back to commit messages mentioning
> > corresponding changes to consuming projects which were still in
> > review (some for many months) or had been abandoned.




[openstack-dev] [puppet] roadmap proposal for puppet-ceph

2016-05-06 Thread Emilien Macchi
Hi,

Here's a roadmap proposal for puppet-ceph:

1) Release 1.0.0
1.0.0 will be the first release; we'll also create stable/hammer since
that was the release tested in our CI.
https://review.openstack.org/#/c/313687/
https://review.openstack.org/313677 (for release notes)

2) Support and Deploy Jewel by default
Jewel is the new LTS, let's install it by default in our CI, starting
from master.
https://review.openstack.org/#/c/313662/

3) Support Ubuntu Xenial
https://review.openstack.org/#/c/313644/

4) Enable Xenial jobs in gate (currently in experimental pipeline).


Feedback / comments are welcome, in Gerrit or via this thread.
Thanks,
-- 
Emilien Macchi



Re: [openstack-dev] [requirements] Cruft entries found in global-requirements.txt

2016-05-06 Thread Davanum Srinivas
Thanks Jeremy,

Since the following folks volunteered to help with requirements:
Dirk Mueller (SUSE)
Haïkel Guémar (RDO/CentOS)
Igor Yozhikov (MOS)
Alan Pevec (RDO)
Tony Breeds (Rackspace)
Ghe Rivero (HPE)

So Dirk, Haïkel, Igor, Alan, Tony, Ghe, can you please do more
research and file reviews wherever needed to clean up?

For example, feedparser is in a few projects, but not used in those
projects, so we can file reviews to clean up entries in specific
projects (except openstack-health where it is definitely used), and
also file a review against g-r and u-c to clean up the entries in the
requirements repo as well.

If anyone else wants to get involved in requirements, please join the
fun as well.

Thanks,
Dims


On Fri, May 6, 2016 at 2:44 PM, Jeremy Stanley  wrote:
> On 2016-05-06 13:38:43 -0500 (-0500), Brant Knudson wrote:
>> python-ldap and ldappool are in keystone's setup.cfg:
>> http://git.openstack.org/cgit/openstack/keystone/tree/setup.cfg#n24
>
> Yep, that script will miss things like optional dependencies
> declared in setup.cfg or things installed directly from setup.py.
> Also I'm pretty sure it predates our python-version-specific entries
> so it may not be parsing those entirely correctly. Plugging
> potential cruft into http://codesearch.openstack.org/ might help
> rule out some possibilities. When I started trying to clean this up
> a while back (2014 maybe?) I noticed at least a few where `git blame
> global-requirements.txt` led me back to commit messages mentioning
> corresponding changes to consuming projects which were still in
> review (some for many months) or had been abandoned.
> --
> Jeremy Stanley
>



-- 
Davanum Srinivas :: https://twitter.com/dims



[openstack-dev] [Neutron] Diagnostics & troubleshooting design summit summary and next steps

2016-05-06 Thread Assaf Muller
It is my personal experience that unless I do my homework, design
summit discussions largely go over my head. I'd guess that most people
don't have time to research the topic of every design session they
intend to go to, so for the session I led I decided to do the
unthinkable and present the context of the discussion [1] with a few
slides [2] (That part of the session took I think 3 minutes). I'd love
to get feedback particularly on that, if people found it useful we may
consider increasing adoption of that habit for the Barcelona design
summit.

The goal for the session was to achieve consensus on the very high-level
topics: do we want to do Neutron diagnostics in-tree and via the
API? I believe that goal was achieved, and the answer to both
questions is 'yes'.

Since there have been at least 4 RFEs submitted in this domain, the next
step is to try and converge on one and iterate on an API. For these
purposes we will be using Hynek's spec, under review here [3]. I was
approached by multiple people who are interested in assisting with
the implementation phase; please say so on the spec so that Hynek will
be able to add you as a contributor.

I foresee a few contention points, chief of which is the abstraction
level of the API and how best to present diagnostics information in a
way that is plugin agnostic. The trick will be to find an API that is
not specific to the reference implementation while still providing a
great user experience to the vast majority of OpenStack users.

A couple of projects in the domain were mentioned, specifically
Monasca and Steth. Contributors from these projects are highly
encouraged to review the spec.

[1] https://etherpad.openstack.org/p/newton-neutron-troubleshooting
[2] 
https://docs.google.com/presentation/d/1IBVZ6defUwhql4PEmnhy3fl9qWEQVy4iv_IR6pzkFKw/edit?usp=sharing
[3] https://review.openstack.org/#/c/308973/



Re: [openstack-dev] [kolla] Ironic broken on Ubuntu

2016-05-06 Thread Jeff Peeler
On Thu, May 5, 2016 at 1:51 PM, Franck Barillaud  wrote:
> All the containers are started except for 'ironic_inspector' and
> 'nova_compute_ironic'.  Looking at the
> 'kolla/docker/ironic/ironic-inspector' shows no construct for Ubuntu.

Yes, it's a feature gap that is known:
https://bugs.launchpad.net/kolla/+bug/1565936

> Rebuilt the ironic containers successfully on centos. The question is how
> can I deploy them? Using 'kolla-ansible deploy' does not support 'mixed'
> environments.

The "mixed" environment issue [1][2] is one reason why the containers
are turned off by default. For reference, to enable the service you
modify it in /etc/kolla/globals.yml (after copying it) [3]. I really
struggled to on how to produce a Kolla compatible environment for
Ironic to operate in. Kolla is currently looking to integrate with
Bifrost in order to utilize ironic's services - stay tuned.

[1] http://lists.openstack.org/pipermail/openstack/2015-September/013986.html
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2015-September/073530.html
[3] 
https://github.com/openstack/kolla/blob/2e396fec9807d1bfdb7a51027e85257f7d53a991/etc/kolla/globals.yml#L114



Re: [openstack-dev] [kolla] [bifrost] bifrost container.

2016-05-06 Thread Fox, Kevin M
I was under the impression bifrost was 2 things: one, an installer/configurator 
of ironic in a stand-alone mode, and two, a management tool for getting 
machines deployed using ironic without needing nova.

The first use case seems like it should just be handled by enhancing kolla's 
ironic container stuff directly to handle the use case, doing things the 
kolla way. This seems much cleaner to me. Doing it at runtime loses most of 
the benefits of doing it in a container at all.

The second adds a lot of value I think, and that's what the bifrost container 
should be?

Thanks,
Kevin


[openstack-dev] [UX] OpenStack UX core nomination

2016-05-06 Thread Kruithof Jr, Pieter
OpenStack Community,

I would like to nominate Lana Brindley as a core for the OpenStack UX project.

Lana has been very active in supporting ongoing documentation efforts for 
cross-project user experience initiatives including the OpenStack personas and 
the UX Checklist for evaluating GUIs.  In addition, the docs team has supported 
research efforts for the various projects.

Her nomination furthers the goal of OpenStack UX to support cross-project 
initiatives.

Piet Kruithof

PTL, OpenStack UX project


Piet Kruithof

Sr User Experience Architect,
Intel Open Source Technology Group

Project Technical Lead (PTL)
OpenStack UX project


Re: [openstack-dev] [all] cross-project deployment tool meeting

2016-05-06 Thread Emilien Macchi
I started https://etherpad.openstack.org/p/deployment-tools-wg
Recent threads on openstack-dev show that we need to work together
(xenial things, etc).

If you represent an OpenStack deployment tool (Chef, Ansible, etc),
please add your name and your TZ in the etherpad.
Also feel free to bring topics, so we can start a bi-monthly meeting
(maybe less/more?) and work together.

Thanks and enjoy the weekend.

On Fri, May 6, 2016 at 3:51 PM, Jesse Pretorius
 wrote:
> On 26 April 2016 at 17:54, Jan Klare  wrote:
>>
>>
>>  I just wanted to follow up on this session
>> (https://etherpad.openstack.org/p/newton-deployment-tools-discussion) where
>> we talked about a cross-project meeting for deployment tools. I would love
>> to see something like that happen and it would be great if we can find a
>> specific date (maybe monthly) to do something like that. If you are
>> interested in going to such a meeting, please reply to this mail with a
>> suggestion when you could join such a meeting.
>>
>> Cheers,
>> Jan (OpenStack Chef)
>
>
> Thanks Jan. I think once per month will be enough.
>
> I'm based in the UK and am reasonably flexible around times, although it is
> usually more productive if it can be held during my day rather than my
> evening.
>



-- 
Emilien Macchi



Re: [openstack-dev] [Group-based-policy] Service Chain work with LBaaS/FWaaS

2016-05-06 Thread Sumit Naiksatam
Hi Yao, Responses inline.

Thanks,
~Sumit.

On Fri, May 6, 2016 at 12:32 AM, 姚威  wrote:
> Hi all,
>
> I know that GBP can work with neutron (ml2) via resource_mapping, and
> groups/policies all work well.
> Assuming that I have installed and enabled LBaaS and FWaaS, can I use the
> service chain of GBP via `chain_mapping` or other plugins?
>

Yes. You might want to take a look at the Austin summit video and the
accompanying document which describes all this. Both are available
here:
https://wiki.openstack.org/wiki/GroupBasedPolicy/Austin

> Another question: I use GBP with the Cisco APIC as the native driver; what is
> the GBP service chain workflow? For example, creating a service spec/node and
> applying it to a rule.
>

I believe the answer is yes since the APIC driver is a driver, and the
API is the same for all the drivers.

> I have searched over the Internet, but found little reference or discussion.
>
>
> Thanks
>
> Yao Wei
>
>



Re: [openstack-dev] [kolla] [bifrost] bifrost container.

2016-05-06 Thread Mooney, Sean K


From: Steven Dake (stdake) [mailto:std...@cisco.com]
Sent: Friday, May 6, 2016 6:56 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [kolla] [bifrost] bifrost container.

Sean,

Thanks for taking this on :)  I didn't know you had such an AR :)
[Mooney, Sean K] well if others want to do the work that is ok with me too, 
but I was planning on deploying bifrost at home again anyway, so I thought I 
might as well try to automate the process while I'm at it.

From: "Mooney, Sean K" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Friday, May 6, 2016 at 10:14 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [kolla] [bifrost] bifrost container.

Hi everyone.

Following up on my AR from the kolla host repository session
https://etherpad.openstack.org/p/kolla-newton-summit-kolla-kolla-host-repo
I started working on creating a kolla bifrost container.

After some initial success I have hit a roadblock with the current install 
playbook provided by bifrost.
In particular, the install playbook both installs the ironic dependencies and 
configures and runs the services.


What I'd do here is ignore the install playbook and duplicate what it installs. 
 We don't want to install at run time, we want to install at build time.  You 
weren't clear if that is what you're doing.
[Mooney, Sean K] that is certainly an option, but bifrost is an installer for 
ironic and its supporting services. Not using its installation scripts 
significantly reduces the value of integrating with bifrost vs fixing the 
existing ironic support in kolla and using that to provision the undercloud.

The reason we would ignore the install playbook is because it runs the 
services.  We need to run the services in a different way.  This will (as we 
discussed at ODS) be a fat container on the undercloud - which I guess is 
ok.  I'd recommend not using systemd, as that will break systemd systems badly. 
 Instead use a different init system, such as supervisord.
[Mooney, Sean K] if we don't use the bifrost install playbook then yes, 
supervisord would be a good choice for the init system.
Looking at the official centos docker image https://hub.docker.com/_/centos/ 
they do provide instructions for running systemd containers, though I have had 
issues with this in the past.

The installation of ironic and its dependencies would not be a problem, but 
the ansible service module is not capable of starting the
infrastructure services (mysql, rabbit, ...) without a running init system, 
which is not present during the docker build.

When I created a bifrost container in the past I spawned an Ubuntu upstart 
container, then docker exec'd into the container and ran the
bifrost install script. This works because the init system is running and the 
service module could test and start the relevant services.


This leaves me with 3 paths forward.


1.   I can continue to try and make the bifrost install script work with 
the kolla build system by using sed to modify the install playbook, or try to 
start systemd during the docker build.

2.   I can use the kolla build system to build only part of the image

a.   The bifrost-base image would be built with the kolla build system 
without running the bifrost playbook. This
would allow the existing features of the build system, such as adding 
headers/footers, to be used.

b.  After the base image is built by kolla I can spawn an instance of 
bifrost-base with systemd running

c.   I can then connect to this running container and run the bifrost 
install script unmodified.

d.  Once it is finished I can stop the container and export it to an image 
"bifrost-postinstall".

e.  This can either be used directly (fat container) or as the base image 
for other containers that run each of the ironic services (thin containers)

3.   I can skip the kolla build system entirely and create a 
script/playbook that will build the bifrost container similar to 2.

4.   Make a supervisord set of init scripts and make the docker file do what 
it was intended to do - install the files.  This is kind of a mashup of your 
1-3 ideas.  Good thinking :)


While option 1 would fully use the kolla build system, it is my least favorite 
as it is both hacky and complicated to make work.
Docker really was not designed to run systemd as part of docker build.

For options 2 and 3 I can provide a single playbook/script that will fully 
automate the build, but the real question I have
is should I use the kolla build system to make the base image or not.

If anyone else has suggestions on how I can progress please let me know, but 
currently I am leaning towards 

Re: [openstack-dev] [kolla] Kolla rpm distribution

2016-05-06 Thread Jeff Peeler
On Fri, May 6, 2016 at 2:54 AM,   wrote:
>
> Hi,
>
> One of our applications would like to use Kolla as an upstream deployment
> tool. As the application may run in an environment without internet
> connections, we are trying to package Kolla as well as its requirements,
> such as jinja2, into rpm packages and deliver them along with the
> application. We would like to get some advice about:
> 1) Is it the right way to go for our application to build rpms for upstream
> python packages?

If you don't want to rely on pip, then you have to use RPM or some
other method of installing the requirements. It is on the to-do list
to create a document to help people mirror content so that they can
build faster, though I'm not sure if it would fully cover having no
access to the internet at all.

> 2) Is there any plan for Kolla project to implement rpm packaging. As we are
> working on that, I think we can do some contributions.

Kolla just consumes RPMs (and debs). Package availability would
obviously depend on what distribution you're using, but python-jinja2
seems to already be packaged for Fedora:
https://admin.fedoraproject.org/pkgdb/package/rpms/python-jinja2/



Re: [openstack-dev] [all] cross-project deployment tool meeting

2016-05-06 Thread Jesse Pretorius
On 26 April 2016 at 17:54, Jan Klare  wrote:

>
>  I just wanted to follow up on this session (
> https://etherpad.openstack.org/p/newton-deployment-tools-discussion) where
> we talked about a cross-project meeting for deployment tools. I would love
> to see something like that happen and it would be great if we can find a
> specific date (maybe monthly) to do something like that. If you are
> interested in going to such a meeting, please reply to this mail with a
> suggestion when you could join such a meeting.
>
> Cheers,
> Jan (OpenStack Chef)
>

Thanks Jan. I think once per month will be enough.

I'm based in the UK and am reasonably flexible around times, although it is
usually more productive if it can be held during my day rather than my
evening.


[openstack-dev] [Cinder] API features discoverability

2016-05-06 Thread D'Angelo, Scott
I don't think we actually should be moving all the extensions to core, just the 
ones that are supported by all vendors and fully vetted. In other words, we 
should be moving extensions to core based on the original intent of extensions.
That would mean that for backups we could continue to use 
/v2|3//extensions to determine backup support (and anything else 
that is not supported by all vendors, and therefore not in core).
As to whether or not the admin disables extensions that are not support by the 
deployment, I believe that admin should be responsible for their own 
deployment's UX.
Perhaps Deepti's new API has a use here, but I think it's worth discussing 
whether we can get the desired functionality out of the extensions, and whether 
we should strive to use extensions the way they were originally intended.

Scott (scottda)


Ramakrishna, Deepti deepti.ramakrishna at intel.com 

Mon Apr 18 07:17:41 UTC 2016


Hi Michal,

This seemed like a good idea when I first read it. What's more, the server 
code for extension listing [1] does not do any authorization, so it can be 
used by any logged-in user.

However, I don't know if requiring the admin to manually disable an extension 
is practical. First, admins can always forget to do that. Second, even if 
they wanted to, it is not clear how they could disable specific extensions. 
I assume they would need to edit the cinder.conf file. This file currently 
lists the set of extensions to load as cinder.api.contrib.standard_extensions. 
The server code [2] implements this by walking the cinder/api/contrib 
directory and loading all discovered extensions. How is it possible to 
subtract just one extension from the "standard extensions"? Also, system 
capabilities and extensions may not have a 1:1 relationship in general.

Having a new extension API (as proposed by me in [3]) for returning the 
available services/functionality does not have the above problems. It will 
dynamically check the existence of the cinder-backup service, so it does not 
need manual action from the admin. I have published a BP [4] related to this. 
Can you please comment on that?

Thanks,
Deepti

[1] 
https://github.com/openstack/cinder/blob/2596004a542053bc19bb56b9a99f022368816871/cinder/api/extensions.py#L152
[2] 
https://github.com/openstack/cinder/blob/2596004a542053bc19bb56b9a99f022368816871/cinder/api/extensions.py#L312
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-October/077209.html
[4] https://review.openstack.org/#/c/306930/

-Original Message-
From: Michał Dulko [mailto:michal.dulko at intel.com]
Sent: Thursday, April 14, 2016 7:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Cinder] API features discoverability

Hi,

When looking at bug [1] I thought that we could simply use /v2//extensions 
to signal features available in the deployment - in this case backups, as 
these are implemented as an API extension too. A cloud admin can disable an 
extension if their cloud doesn't support a particular feature, and this is 
easily discoverable using the aforementioned call. It looks like that 
solution wasn't proposed when the bug was initially raised.
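
For illustration, the kind of client-side probe this enables looks roughly
like the following (the endpoint shape and the 'backups' alias checked for
are assumptions, for illustration only):

# Sketch: discover backup support by listing extensions. The endpoint
# shape and the extension alias are assumed, not taken from cinder.
import requests

def has_backup_extension(cinder_endpoint, token):
    resp = requests.get(cinder_endpoint + '/extensions',
                        headers={'X-Auth-Token': token})
    resp.raise_for_status()
    aliases = set(ext['alias'] for ext in resp.json()['extensions'])
    return 'backups' in aliases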

Now the problem is that we're actually planning to move all API extensions to 
the core API. Do we plan to keep this API for feature discovery? How do we 
approach API compatibility in this case if we want to change it? Do we have a 
plan for that?

We could keep this extensions API controlled from cinder.conf, regardless of 
the fact that we've moved everything to the core, but that doesn't seem right 
(the API will still be functional even if the administrator disables it in 
configuration, am I right?)

Anyone have thoughts on that?

Thanks,
Michal

[1] https://bugs.launchpad.net/cinder/+bug/1334856




Re: [openstack-dev] [Cinder] Austin Design Summit Recap

2016-05-06 Thread Sheel Rana Insaan
Dear Sean,

Great compilation, this will help for sure!!
Thank you!!

Best Regards,
Sheel Rana

On Fri, May 6, 2016 at 10:38 PM, Sean McGinnis 
wrote:

> At the Design Summit in Austin, the Cinder team met over three days
> to go over a variety of topics. This is a general summary of the
> notes captured from each session.
>
> We were also able to record most sessions. Please see the
> openstack-cinder YouTube channel for all its minute and tedious
> glory:
>
> https://www.youtube.com/channel/UCJ8Koy4gsISMy0qW3CWZmaQ
>
> Replication Next Steps
> ==
> Replication v2.1 was added in Mitaka. This was a first step in supporting
> a simplified use case. A few drivers were able to implement support for
> this in Mitaka, with a few already in the queue for support in Newton.
>
> There is a desire to add the ability to replicate smaller groups of volumes
> and control them individually for failover, failback, etc. Eventually we
> would also want to expose this functionality to non-admin users. This will
> allow tenants to group their volumes by application workload or other user
> specific constraint and give them control over managing that workload.
>
> It was agreed that it is too soon to expose this at this point. We would
> first like to get broader vendor support for the current replication
> capabilities before we add anything more. We also want to improve the admin
> experience with handling full site failover. As it is today, there is a lot
> of manual work that the admin would need to do to be able to fully recover
> from a failover. There are ways we can make this experience better. So
> before
> we add additional things on top of replication, we want to make sure what
> we have is solid and at least slightly polished.
>
> Personally, I would like to see some work done with Nova or some third
> party
> entity like Smaug or other projects to be able to coordinate activities on
> the compute and storage sides in order to fail over an environment
> completely
> from a primary to secondary location.
>
> Related to the group replication (tiramisu) work was the idea of generic
> volume groups. Some sort of grouping mechanism would be required to tie in
> to that. We have a grouping today with consistency groups, but that has its
> own set of semantics and expectations that doesn't always fully mesh with
> what users would want for group replication.
>
> There have also been others looking at using consistency groups to enable
> vendor specific functionality not quite inline with the intent of what
> CGs are meant for.
>
> We plan on creating a new concept of a group that has a set of possible
> types.
> One of these types will be consistency, with the goal that internally we
> can
> shift things around to convert our current CG concept to be a group of type
> consistency while still keeping the API interface that users are used to
> for
> working with them.
>
> But beyond that we will be able to add things like a "replication" type
> that
> will allow users to group volumes, that may or may not be able to be snapped
> in an IO-order-consistent manner, but that can be acted on as a group to be
> replicated. We can also expand this group type to other concepts moving
> forward to meet other use cases without needing to introduce a wholly new
> concept. The mechanisms for managing groups will already be in place and a
> new
> type will be able to be added using existing plumbing.
>
> Etherpad:
> https://etherpad.openstack.org/p/cinder-newton-replication
>
> Active/Active High Availability
> ===============================
> Work continues on HA. Gorka gave an overview of the work completed so far
> and
> the work left to do. We are still on the plan proposed at the Tokyo Summit,
> just a lot of work to get it all implemented. The biggest variations are
> around
> the host name used for the "clustered" service nodes and the idea that we
> will
> not attempt to do any sort of automatic cleanup for in-progress work that
> gets
> orphaned due to a node failure.
>
> Etherpad:
> https://etherpad.openstack.org/p/cinder-newton-activeactiveha
>
> Mitaka Recap
> ============
> Two sessions were devoted to going over what had changed in Mitaka. There
> were
> a lot of things introduced that developers and code reviewers now need to
> be
> aware of, so we wanted to spend some time educating everyone on these
> things.
>
> Conditional DB Updates
> ----------------------
> To try to eliminate races (partly related to the HA work) we will now use
> conditional updates. This will eliminate the gap between checking a value
> and
> setting it, making it one atomic DB update. Better performance than locking
> around operations.
>
> Microversions
> -------------
> API microversions was implemented in Mitaka. The new /v3 endpoint should be
> used. Any change in the API should now be implemented as a microversion
> bump.
> Devref in Cinder with details of how to use this and more detail as to when
> 

Re: [openstack-dev] [trove][sahara][infra][Octavia][manila] discussion of image building in Trove

2016-05-06 Thread Flavio Percoco

On 05/05/16 19:16 +, Amrith Kumar wrote:

Pete, please clarify … I was going to push the dib elements that we currently
have and you were writing CentOS elements. Is that right?



Seems like there are some crossed wires here.


Pete is out today so chiming in on his behalf for now. At the summit Pete signed
up for amending the current spec, including the DIB bits in there, and for
pulling the DIB elements out of trove-integration into the repo, which Pete
himself agreed to create as well.

I believe he's already started on this, so I'd prolly let him handle it as
agreed at the summit.

Thanks,
Flavio





-amrith



From: Victoria Martínez de la Cruz [mailto:victo...@vmartinezdelacruz.com]
Sent: Thursday, May 05, 2016 10:30 AM
To: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [trove][sahara][infra][Octavia][manila] discussion
of image building in Trove



We agreed during the summit that we were going to amend the spec to reflect
the latest discussions with regard to having DIB as the primary implementation
and adding support for libguestfs in parallel. The spec blueprint is named
"Trove image builder" and it's about building images, not about which tool we
are going to use. Thanks for creating the artifacts we need to push the code;
we'll take it over from there.



2016-05-05 11:12 GMT-03:00 Amrith Kumar :




   From: Victoria Martínez de la Cruz [mailto:victo...@vmartinezdelacruz.com]
   Sent: Thursday, May 05, 2016 9:00 AM
   To: OpenStack Development Mailing List (not for usage questions) <
   openstack-dev@lists.openstack.org>
   Subject: Re: [openstack-dev] [trove][sahara][infra][Octavia][manila]
   discussion of image building in Trove




   Hi all,




   A few things:




   - I agree that moving from DIB to libguestfs is a bold move and that we
   should try to avoid changing tools unless highly necessary. The downsides
   we found for DIB are detailed in this spec [0] and Ethan (in this same
   thread) also added valid points on the Sahara case. My concern here is,
   should we stick with DIB just because it is the standard for image
   creation? Shouldn't we take into consideration that some projects, like
   Sahara, are moving away from it?

   - In the long term it would be ideal to reach a common solution for
   image creation for all the projects that need tailored images: Trove,
   Sahara, Octavia, Manila, and IIRC, Kolla and Cue.

   - In the short term, I'm on board for working on having tools based on DIB
   for image creation in Trove.

   - Amrith, Pete is working on the image creation process for Trove. The spec
   is up there [0]. I think it is his work to kick off that repository.

   [amrith] The spec [0] referenced is entitled “Separate trove image build
   project based on libguestfs tools”. I am working on image building using
   the existing DIB elements that are already part of trove-integration. In
   any event, please see line 220 of [0] for a detailed explanation of why I
   am making the repository.




   Best,




   Victoria




   [0] https://review.openstack.org/#/c/295274/




   2016-05-04 23:20 GMT-03:00 Amrith Kumar :

   As we discussed at the summit (and consistent with all of the comments), we
   should move ahead with the project to advance the image builder for
   Trove and make it easier to build guest images for Trove by leveraging
   the DIB elements that we have in trove-integration.




   To that end, the infra [1] and governance [2] changes have been
   submitted for review. The Launchpad tracker [3] has been registered.




   I am working on taking the existing DIB elements in trove-integration
   and putting them in the new repository (openstack/trove-image-builder).
   I am also going to continue to watch this conversation and record any
   shortcomings with the existing DIB elements in Launchpad [3] and work
   on fixing those as well. Pete mentions one in his earlier email and
   I’ve logged that in Launchpad [4].




   Thanks,




   -amrith




   [1] https://review.openstack.org/#/c/312805/

   [2] https://review.openstack.org/#/c/312806/

   [3] https://launchpad.net/trove-image-builder

   [4] https://bugs.launchpad.net/trove-image-builder/+bug/1578454








   From: Mariam John [mailto:mari...@us.ibm.com]
   Sent: Wednesday, May 04, 2016 4:19 PM
   To: OpenStack Development Mailing List (not for usage questions) <
   openstack-dev@lists.openstack.org>

  
   Subject: Re: [openstack-dev] [trove][sahara][infra][Octavia][manila]

   discussion of image building in Trove




   The way I see this, these are the 2 main concerns I have been hearing
   regarding image building in Trove:
   1) making the process simple 

Re: [openstack-dev] [requirements] Cruft entries found in global-requirements.txt

2016-05-06 Thread Jeremy Stanley
On 2016-05-06 13:38:43 -0500 (-0500), Brant Knudson wrote:
> python-ldap and ldappool are in keystone's setup.cfg:
> http://git.openstack.org/cgit/openstack/keystone/tree/setup.cfg#n24

Yep, that script will miss things like optional dependencies
declared in setup.cfg or things installed directly from setup.py.
Also I'm pretty sure it predates our python-version-specific entries
so it may not be parsing those entirely correctly. Plugging
potential cruft into http://codesearch.openstack.org/ might help
rule out some possibilities. When I started trying to clean this up
a while back (2014 maybe?) I noticed at least a few where `git blame
global-requirements.txt` led me back to commit messages mentioning
corresponding changes to consuming projects which were still in
review (some for many months) or had been abandoned.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] Cruft entries found in global-requirements.txt

2016-05-06 Thread Steve Martinelli
python-ldap is definitely used by keystone. I think expanding your search
to include setup.cfg in addition to req.txt and test-req.txt would catch
this case.
ldap is optional for keystone and we use setuptools' optional dependencies
to expose it. I'll update the etherpad.
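
For anyone unfamiliar with that mechanism, a minimal sketch of the idea in
raw setuptools terms (keystone itself declares this in setup.cfg via pbr,
and the names and version pins below are made up):

    from setuptools import setup

    # Hedged sketch of setuptools "extras": optional dependency groups
    # that a plain requirements.txt scan will not see. Installed with:
    #   pip install example-service[ldap]
    setup(
        name='example-service',
        packages=['example_service'],
        extras_require={
            'ldap': ['python-ldap>=2.4', 'ldappool>=1.0'],
        },
    )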

On Fri, May 6, 2016 at 2:20 PM, Davanum Srinivas  wrote:

> Folks,
>
> Thanks to Jeremy for pointing to [1]. Please see list below with
> things that are considered cruft as they don't seem to appear in
> requirements/test-requirements in projects. Some of them are clearly
> needed by us :) like libvirt-python. Others are questionable. Example
> sockjs-tornado added for Horizon ended up not being used AFAICT.
>
> Please add notes in etherpad if anyone has an idea if these are needed or
> not:
> https://etherpad.openstack.org/p/requirements-cruft
>
> Thanks,
> Dims
>
> [1]
> http://git.openstack.org/cgit/openstack/requirements/tree/tools/cruft.sh
> [2] https://review.openstack.org/#/q/topic:bp/sparklines,n,z
>
>
> ==
> XStatic-Angular-FileUpload
> XStatic-JQuery.Bootstrap.Wizard
> XStatic-Magic-Search
> XStatic-QUnit
> XenAPI
> aodhclient
> argcomplete
> botocore
> ceilometermiddleware
> dcos
> django-bootstrap-form
> extras
> fairy-slipper
> feedparser
> hgtools
> influxdb
> ironic-discoverd
> ldappool
> libvirt-python
> mimic
> netmiko
> notifier
> os-apply-config
> os-cloud-config
> os-net-config
> os-refresh-config
> posix_ipc
> pyghmi
> pylxd
> pysqlite;python_version
> python-consul
> python-ldap
> python-solumclient
> requestsexceptions
> singledispatch
> sockjs-tornado
> sphinxcontrib-blockdiag
> tripleo-image-elements
> weakrefmethod;python_version
> xmltodict
> ==
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Austin summit priorities session recap

2016-05-06 Thread Nikhil Komawar
Thanks for sending this out Matt. I added an inline comment here.

On Thu, May 5, 2016 at 8:34 PM, Matt Riedemann 
wrote:

> There are still a few design summit sessions from the summit that I'll
> recap but I wanted to get the priorities session recap out as early as
> possible. We held that session in the last slot on Thursday. The full
> etherpad is here [1].
>
> The first part of the session was mostly going over schedule milestones.
>
> We already started Newton with a freeze on spec approvals for new things
> since we already have a sizable backlog [2]. Now that we're past the summit
> we can approve specs for new things again.
>
> The full Newton release schedule for Nova is in this wiki [3].
>
> These are the major dates from here on out:
>
> * June 2: newton-1, non-priority spec approval freeze
> * June 30: non-priority feature freeze
> * July 15: newton-2
> * July 19-21: Nova Midcycle
> * Aug 4: priority spec approval freeze
> * Sept 2: newton-3, final python-novaclient release, FeatureFreeze, Soft
> StringFreeze
> * Sept 16: RC1 and Hard StringFreeze
> * Oct 7, 2016: Newton Release
>
> The important thing for most people right now is we have exactly four
> weeks until the non-priority spec approval freeze. We then have about one
> month after that to land all non-priority blueprints.
>
> Keep in mind that we've already got 52 approved blueprints and most of
> those were re-approved from Mitaka, so have been approved for several weeks
> already.
>
> The non-priority blueprint cycle is intentionally restricted in Newton
> because of all of the backlog work we've had spilling over into this
> release. We really need to focus on getting as much of that done as
> possible before taking on more new work.
>
> For the rest of the priorities session we talked about what our actual
> review priorities are for Newton. The list with details and owners is
> already available here [4].
>
> In no particular order, these are the review priorities:
>
> * Cells v2
> * Scheduler
> * API Improvements
> * os-vif integration
> * libvirt storage pools (for live migration)
> * Get Me a Network
> * Glance v2 Integration
>

I saw the priorities review ( https://review.openstack.org/#/c/312217/ )
has been merged, so I wanted to point that out here. I know the Nova team
cares about the history section of the specs, so the dates are clearer from
the links posted in the comment. To be more explicit: the Glance v2 work
(BP+code) was initially proposed in Icehouse, and the co-located mid-cycle
was in Kilo, where we had a brief session on the Glance v2 work (& thanks to
all the Nova members who have been giving their input). This is also the
reason for my comments/questions in the etherpad.


>
> We *should* be able to knock out glance v2, get-me-a-network and os-vif
> relatively soon (I'm thinking sometime in June).
>
> Not listed in [4] but something we talked about was volume multi-attach
> with Cinder. We said this was going to be a 'stretch goal' contingent on
> making decent progress on that item by non-priority feature freeze *and* we
> get the above three smaller priority items completed.
>
> Another thing we talked about but isn't going to be a priority is
> NFV-related work. We talked about cleaning up technical debt and additional
> testing for NFV but had no one in the session signed up to own that work or
> with concrete proposals on how to make improvements in that area. Since we
> can't assign review priorities to something that nebulous it was left out.
> Having said that, Moshe Levi has volunteered to restart and lead the
> SR-IOV/PCI bi-weekly meeting [5] (thanks again, Moshe!). So if you (or your
> employer, or your vendor) are interested in working on NFV in Nova please
> attend that meeting and get involved in helping out that subteam.
>
> [1] https://etherpad.openstack.org/p/newton-nova-summit-priorities
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/090370.html
> [3] https://wiki.openstack.org/wiki/Nova/Newton_Release_Schedule
> [4]
> https://specs.openstack.org/openstack/nova-specs/priorities/newton-priorities.html
> [5]
> http://lists.openstack.org/pipermail/openstack-dev/2016-April/093541.html
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] Cruft entries found in global-requirements.txt

2016-05-06 Thread Brant Knudson
On Fri, May 6, 2016 at 1:20 PM, Davanum Srinivas  wrote:

> Folks,
>
> Thanks to Jeremy for pointing to [1]. Please see list below with
> things that are considered cruft as they don't seem to appear in
> requirements/test-requirements in projects. Some of them are clearly
> needed by us :) like libvirt-python. Others are questionable. Example
> sockjs-tornado added for Horizon ended up not being used AFAICT.
>
> Please add notes in etherpad if anyone has an idea if these are needed or
> not:
> https://etherpad.openstack.org/p/requirements-cruft
>
> Thanks,
> Dims
>
> [1]
> http://git.openstack.org/cgit/openstack/requirements/tree/tools/cruft.sh
> [2] https://review.openstack.org/#/q/topic:bp/sparklines,n,z
>
>
> ==
> XStatic-Angular-FileUpload
> XStatic-JQuery.Bootstrap.Wizard
> XStatic-Magic-Search
> XStatic-QUnit
> XenAPI
> aodhclient
> argcomplete
> botocore
> ceilometermiddleware
> dcos
> django-bootstrap-form
> extras
> fairy-slipper
> feedparser
> hgtools
> influxdb
> ironic-discoverd
> ldappool
> libvirt-python
> mimic
> netmiko
> notifier
> os-apply-config
> os-cloud-config
> os-net-config
> os-refresh-config
> posix_ipc
> pyghmi
> pylxd
> pysqlite;python_version
> python-consul
> python-ldap
> python-solumclient
> requestsexceptions
> singledispatch
> sockjs-tornado
> sphinxcontrib-blockdiag
> tripleo-image-elements
> weakrefmethod;python_version
> xmltodict
> ==
>
>
python-ldap and ldappool are in keystone's setup.cfg:
http://git.openstack.org/cgit/openstack/keystone/tree/setup.cfg#n24

-- 
- Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] [bifrost] bifrost container.

2016-05-06 Thread Fox, Kevin M
Another option: should the install playbook be enhanced to support simply
skipping the steps that wouldn't apply to building in the container?

Seems to me, all the ironic stuff could just be done with the kolla ironic 
container, so no systemd stuff should be needed.

Thanks,
Kevin

From: Mooney, Sean K [sean.k.moo...@intel.com]
Sent: Friday, May 06, 2016 10:14 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [kolla] [bifrost] bifrost container.

Hi everyone.

Following up on my AR from the kolla host repository session
https://etherpad.openstack.org/p/kolla-newton-summit-kolla-kolla-host-repo
I started working on creating a kolla bifrost container.

After some initial success I have hit a roadblock with the current install
playbook provided by bifrost.
In particular the install playbook both installs the ironic dependencies and
configures and runs the services.

The installation of ironic and its dependencies would not be a problem, but the
ansible service module is not capable of starting the
infrastructure services (mysql, rabbit, …) without a running init system, which
is not present during the docker build.

When I created a bifrost container in the past I spawned an Ubuntu upstart
container, then docker exec'd into the container and ran the
Bifrost install script. This works because the init system is running and the
service module could test and start the relevant services.


This leave me with 3 paths forward.


1.   I can continue to try and make the bifrost install script work with 
the kolla build system by using sed to modify the install playbook or try start 
systemd during the docker build.

2.   I can use the kolla build system to build only part of the image

a.   the bifrost-base image would be built with the kolla build system
without running the bifrost playbook. This
would allow the existing features of the build system, such
as adding headers/footers, to be used.

b.  After the base image is built by kolla I can spawn an instance of 
bifrost-base with systemd running

c.   I can then connect to this running container and run the bifrost 
install script unmodified.

d.  Once it is finished I can stop the container and export it to an image
“bifrost-postinstall”.

e.  This can either be used directly (fat container) or as the base image 
for other container that run each of the ironic services (thin containers)

3.   I can skip the kolla build system entirely and create a
script/playbook that will build the bifrost container similar to 2.


While option 1 would fully use the kolla build system, it is my least favorite
as it is both hacky and complicated to make work.
Docker really was not designed to run systemd as part of docker build.

For option 2 and 3 I can provide a single playbook/script that will fully 
automate the build but the real question I have
Is should I use the kolla build system to make the base image or not.

If anyone else has suggestions on how I can progress, please let me know, but
currently I am leaning towards option 2.

The only other option I see would be to not use a container and either install
bifrost on the host or in a VM.
These would essentially be a no-op for kolla, as we would simply have to
document how to install bifrost, which is covered
quite well as part of the bifrost project.

Regards
Sean.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Angular form framework

2016-05-06 Thread Tripp, Travis S
Yes, it is angular specific. If there is something that can work across all 
frameworks, then that would be good to know.

From: Michael Krotscheck
Reply-To: OpenStack List
Date: Thursday, May 5, 2016 at 8:51 AM
To: OpenStack List
Subject: Re: [openstack-dev] [horizon] Angular form framework

This feels like a thing for AngularJS projects only, yes? What about projects 
like Fuel that use React?

Michael

On Wed, May 4, 2016 at 9:00 PM Tripp, Travis S wrote:
Hello everybody,

I sent a message about this directly to a couple of people for their quick
thoughts. It looks like there is enough interest that I should have just sent
it to the whole ML from the start. I’d like to keep folks in the loop, so I’m
copying it all below with all of the responses to date.

Thanks,
Travis

From: "Borland, Matt" 
>>

That sounds great, it would be good to use something we don’t have to maintain. 
 Timur, thanks for researching that!

Matt

From: Timur Sufiev [mailto:tsuf...@mirantis.com]

Folks,

I was going to research this library as a possible prerequisite to
angularizing the murano-dashboard 'dynamic ui' feature. So I'm planning to
start looking into it next week.

On Mon, 2 May 2016 at 20:21, Thai Q Tran wrote:
I think that it will remove a lot of boilerplate HTML, and is much more 
extensible than the current way of creating forms. But we may need to extend 
the directive to do more things.

The options they provide do not cover the 2 cases that I brought up.
hz-password and hz-if-* are both directives.
For example: 

This says that if some_rules passes, then show this input, otherwise, hide it. 
Essentially, what we need is the ability to inject additional attrs into each 
form field so that we can include our own directives. If we can somehow extend 
ngSchemaForm to support this, it should work.

Alternately, we can do the policy check in javascript instead. It just means we 
have to use the services directly rather than their directive counterparts 
(most of the directives we have are backed by a service, i.e. hz-if-policy uses 
the policy service). It's less nice but should also work.

Ultimately, I think going this direction is right, as the extensibility
benefits outweigh the declarative readability. There is still a separation of
concerns; the forms can be declared like how we declare actions today (in a
service that we can extend).

- Original message -
From: "Rob Cresswell (rcresswe)" 
>>

I’m a pretty big fan of this idea, I’ve mentioned it at basically every meet up 
we’ve had. Building up content like this is a great way of preventing 
duplication.

Thai, the forms can take specific conditions to control their display: 
https://github.com/json-schema-form/angular-schema-form/blob/master/docs/index.md#standard-options
 as well as custom form fields, so it looks like that solves both of your 
issues?

Rob

On 27 Apr 2016, at 11:44, tsuf...@mirantis.com wrote:

I recall mentioning model-directed generation of forms layout (since they are
pretty verbose) at the Hillsboro midcycle; the response was that 'mixing
logic/model and presentation is not the best pattern'.

On Wed, Apr 27, 2016 at 7:41 PM Thai Q Tran wrote:
Looks interesting, but I'm not too sure it will work for all cases. I can think 
of a few cases where this might not work.

Case 1. We have custom directives that modify existing input forms such as 
hz-password. Not sure how we will be able to incorporate it if we use an 
auto-generated form.

Case 2. We have things like hz-if that we may use to control which form fields 
to show. Again, not sure how this will work if we are auto-generating the form. 
I suppose you would have to do the check in the controller and modify the JSON 
to reflect this. But that will make it less declarative.

- Original message -
From: "Tripp, Travis S" 
>>

Alex Tivelkov at Mirantis mentioned this to me.  Has anybody looked at this to
see if it is something we might want to incorporate? He said it allows using
JSON schema definitions to generate forms.  As FYI, the Metadata Definitions in 
Glance are in 

[openstack-dev] [requirements] Cruft entries found in global-requirements.txt

2016-05-06 Thread Davanum Srinivas
Folks,

Thanks to Jeremy for pointing to [1]. Please see list below with
things that are considered cruft as they don't seem to appear in
requirements/test-requirements in projects. Some of them are clearly
needed by us :) like libvirt-python. Others are questionable. Example
sockjs-tornado added for Horizon ended up not being used AFAICT.

Please add notes in etherpad if anyone has an idea if these are needed or not:
https://etherpad.openstack.org/p/requirements-cruft

Thanks,
Dims

[1] http://git.openstack.org/cgit/openstack/requirements/tree/tools/cruft.sh
[2] https://review.openstack.org/#/q/topic:bp/sparklines,n,z


==
XStatic-Angular-FileUpload
XStatic-JQuery.Bootstrap.Wizard
XStatic-Magic-Search
XStatic-QUnit
XenAPI
aodhclient
argcomplete
botocore
ceilometermiddleware
dcos
django-bootstrap-form
extras
fairy-slipper
feedparser
hgtools
influxdb
ironic-discoverd
ldappool
libvirt-python
mimic
netmiko
notifier
os-apply-config
os-cloud-config
os-net-config
os-refresh-config
posix_ipc
pyghmi
pylxd
pysqlite;python_version
python-consul
python-ldap
python-solumclient
requestsexceptions
singledispatch
sockjs-tornado
sphinxcontrib-blockdiag
tripleo-image-elements
weakrefmethod;python_version
xmltodict
==

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] [bifrost] bifrost container.

2016-05-06 Thread Steven Dake (stdake)
Sean,

Thanks for taking this on :)  I didn't know you had such an AR :)

From: "Mooney, Sean K" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Friday, May 6, 2016 at 10:14 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [kolla] [bifrost] bifrost container.

Hi everyone.

Following up on my AR from the kolla host repository session
https://etherpad.openstack.org/p/kolla-newton-summit-kolla-kolla-host-repo
I started working on creating a kolla bifrost container.

After some initial success I have hit a roadblock with the current install
playbook provided by bifrost.
In particular the install playbook both installs the ironic dependencies and
configures and runs the services.


What I'd do here is ignore the install playbook and duplicate what it installs.
 We don't want to install at run time, we want to install at build time.  You
weren't clear if that is what you're doing.

The reason we would ignore the install playbook is because it runs the 
services.  We need to run the services in a different way.  This will (as we 
discussed at ODS) be a fat container on the underlord cloud – which I guess is 
ok.  I'd recommend not using systemd, as that will break systemd systems badly. 
 Instead use a different init system, such as supervisord.

The installation of ironic and its dependencies would not be a problem, but the
ansible service module is not capable of starting the
infrastructure services (mysql, rabbit, …) without a running init system, which
is not present during the docker build.

When I created a bifrost container in the past I spawned an Ubuntu upstart
container, then docker exec'd into the container and ran the
Bifrost install script. This works because the init system is running and the
service module could test and start the relevant services.


This leave me with 3 paths forward.


1.   I can continue to try and make the bifrost install script work with 
the kolla build system by using sed to modify the install playbook or try start 
systemd during the docker build.

2.   I can use the kolla build system to build only part of the image

a.   the bifrost-base image would be built with the kolla build system
without running the bifrost playbook. This
would allow the existing features of the build system, such
as adding headers/footers, to be used.

b.  After the base image is built by kolla I can spawn an instance of 
bifrost-base with systemd running

c.   I can then connect to this running container and run the bifrost 
install script unmodified.

d.  Once it is finished I can stop the container and export it to an image
“bifrost-postinstall”.

e.  This can either be used directly (fat container) or as the base image 
for other container that run each of the ironic services (thin containers)

3.   I can skip the kolla build system entirely and create a
script/playbook that will build the bifrost container similar to 2.

4.
Make a supervisord set of init scripts and make the docker file do what it was 
intended – install the files.  This is kind of a mashup of your 1-3 ideas.  
Good thinking :)


While option 1 would fully use the kolla build system, it is my least favorite
as it is both hacky and complicated to make work.
Docker really was not designed to run systemd as part of docker build.

For option 2 and 3 I can provide a single playbook/script that will fully 
automate the build but the real question I have
Is should I use the kolla build system to make the base image or not.

If anyone else has suggestions on how I can progress, please let me know, but
currently I am leaning towards option 2.


If you have questions about my suggestion to use supervisord, hit me up on IRC. 
 Ideally we would also contribute these init scripts back into the bifrost code
base, assuming they want them, which I think they would.  Nobody will run
systemd in a container, and we all have an interest in seeing BiFrost as the 
standard bare metal deployment model inside or outside of containers.

Regards
-steve

The only other option I see would be to not use a container and either install
bifrost on the host or in a VM.

GROAN – one advantage containers provide us is not mucking up the host OS with 
a bajillion dependencies.  I'd like to keep that part of Kolla intact :)

These would essentially be a no op for kolla as we would simply have to 
document how to install bifrost which is covered
Quite well as part of the bifrost project.

Regards
Sean.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [openstack-ansible] Nominate Major Hayden for core in openstack-ansible-security

2016-05-06 Thread Major Hayden

On 05/03/2016 01:47 PM, Truman, Travis wrote:
> Major has made an incredible number of contributions of code and reviews to 
> the OpenStack-Ansible community. Given his role as the primary author of the 
> openstack-ansible-security project, I can think of no better addition to the 
> core reviewer team.

Thanks for all the kind words in the thread! :)

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-05-06 Thread Cathy Zhang
Thank you!

Cathy

-Original Message-
From: Miguel Angel Ajo Pelayo [mailto:majop...@redhat.com] 
Sent: Friday, May 06, 2016 12:42 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Cathy Zhang
Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
Agent extension for Newton cycle

Sounds good,

   I started by opening a tiny RFE that may help in the organization of flows
inside the OVS agent, for interoperability of features (SFC, TaaS, the OVS
firewall, and even port trunking with just OpenFlow). [1] [2]


[1] https://bugs.launchpad.net/neutron/+bug/1577791
[2] http://paste.openstack.org/show/495967/


On Fri, May 6, 2016 at 12:35 AM, Cathy Zhang  wrote:
> Hi everyone,
>
> We had a discussion on the two topics during the summit. Here is the etherpad 
> link for the discussion.
> https://etherpad.openstack.org/p/Neutron-FC-OVSAgentExt-Austin-Summit
>
> We agreed to continue the discussion on Neutron channel on a weekly basis. It 
> seems UTC 1700 ~ UTC 1800 Tuesday is good for most people.
> Another option is UTC 1700 ~ UTC 1800 Friday.
>
> I will tentatively set the meeting time to UTC 1700 ~ UTC 1800 Tuesday. Hope 
> this time is good for all people who have interest and like to contribute to 
> this work. We plan to start the first meeting on May 17.
>
> Thanks,
> Cathy
>
>
> -Original Message-
> From: Cathy Zhang
> Sent: Thursday, April 21, 2016 11:43 AM
> To: Cathy Zhang; OpenStack Development Mailing List (not for usage 
> questions); Ihar Hrachyshka; Vikram Choudhary; Sean M. Collins; Haim 
> Daniel; Mathieu Rohon; Shaughnessy, David; Eichberger, German; Henry 
> Fourie; arma...@gmail.com; Miguel Angel Ajo; Reedip; Thierry Carrez
> Cc: Cathy Zhang
> Subject: RE: [openstack-dev] [neutron] work on Common Flow Classifier 
> and OVS Agent extension for Newton cycle
>
> Hi everyone,
>
> We have room 400 at 3:10pm on Thursday available for discussion of the two 
> topics.
> Another option is to use the common room with roundtables in "Salon C" during 
> Monday or Wednesday lunch time.
>
> Room 400 at 3:10pm is a closed room while the Salon C is a big open room 
> which can host 500 people.
>
> I am Ok with either option. Let me know if anyone has a strong preference.
>
> Thanks,
> Cathy
>
>
> -Original Message-
> From: Cathy Zhang
> Sent: Thursday, April 14, 2016 1:23 PM
> To: OpenStack Development Mailing List (not for usage questions); 'Ihar 
> Hrachyshka'; Vikram Choudhary; 'Sean M. Collins'; 'Haim Daniel'; 'Mathieu 
> Rohon'; 'Shaughnessy, David'; 'Eichberger, German'; Cathy Zhang; Henry 
> Fourie; 'arma...@gmail.com'
> Subject: RE: [openstack-dev] [neutron] work on Common Flow Classifier 
> and OVS Agent extension for Newton cycle
>
> Thanks for everyone's reply!
>
> Here is the summary based on the replies I received:
>
> 1.  We should have a meet-up for these two topics. The "to" list contains the
> people who have an interest in these topics.
> I am thinking about around lunch time on Tuesday or Wednesday since some 
> of us will fly back on Friday morning/noon.
> If this time is OK with everyone, I will find a place and let you know 
> where and what time to meet.
>
> 2.  There is a bug opened for the QoS Flow Classifier 
> https://bugs.launchpad.net/neutron/+bug/1527671
> We can either change the bug title and modify the bug details or start 
> with a new one for the common FC which provides info on all 
> requirements needed by all relevant use cases. There is a bug opened 
> for OVS agent extension 
> https://bugs.launchpad.net/neutron/+bug/1517903
>
> 3.  There is some very rough ("ugly" as Sean put it :-)) and preliminary
> work on a common FC https://github.com/openstack/neutron-classifier
> which we can see how to leverage. There is also a SFC API spec which
> covers the FC API for SFC usage 
> https://github.com/openstack/networking-sfc/blob/master/doc/source/api
> .rst, the following is the CLI version of the Flow Classifier for your 
> reference:
>
> neutron flow-classifier-create [-h]
> [--description <description>]
> [--protocol <protocol>]
> [--ethertype <ethertype>]
> [--source-port <min source protocol port>:<max source protocol port>]
> [--destination-port <min destination protocol port>:<max destination protocol port>]
> [--source-ip-prefix <source IP prefix>]
> [--destination-ip-prefix <destination IP prefix>]
> [--logical-source-port <logical source port>]
> [--logical-destination-port <logical destination port>]
> [--l7-parameters <L7 parameters>] FLOW-CLASSIFIER-NAME
>
> The corresponding code is here 
> https://github.com/openstack/networking-sfc/tree/master/networking_sfc
> /extensions
>
> 4.  We should come up with a formal Neutron spec for FC and another 
> one for OVS Agent extension and get everyone's review and approval. 
> Here is the etherpad catching our previous requirement discussion on 
> OVS agent (Thanks David for the link! I remember we had this 
> discussion before) 
> https://etherpad.openstack.org/p/l2-agent-extensions-api-expansion
>
>
> More inline.
>
> Thanks,
> Cathy
>
>
> -Original 

Re: [openstack-dev] [kolla] xenial or trusty

2016-05-06 Thread Steven Dake (stdake)


On 5/6/16, 8:32 AM, "Emilien Macchi"  wrote:

>On Fri, May 6, 2016 at 9:09 AM, Jesse Pretorius
> wrote:
>> On 4 May 2016 at 19:21, Emilien Macchi  wrote:
>>>
>>> On Wed, May 4, 2016 at 1:52 PM, Jeffrey Zhang 
>>> wrote:
>>> > I'd like to lock the tag version in a certain branch. One branch only
>>> > supports one distro release.
>>> >
>>> > For example, the mitaka branch only build on Trusty and the
>>> > master/newton
>>> > branch
>>> > only build on Xenial.
>>> >
>>> > So, the branch and OS matrix should look like (fix me and the ?)
>>> >
>>> >           Ubuntu   CentOS   Debian   OracleLinux
>>> > Liberty   14.04    7        ?        ?
>>> > Mitaka    14.04    7        ?        ?
>>> > Master    16.04    7        ?        ?
>>>
>>> FWIW, this is what we plan to do in Puppet OpenStack CI (except we
>>> don't gate on OracleLinux & Debian).
>>
>>
>> FWIW OpenStack-Ansible is choosing to support deployment on both Ubuntu
>> 14.04 LTS and Ubuntu 16.04 LTS for both the Newton and Ocata cycles,
>>with
>> the current proposal to drop it in P. The intent is to provide our
>>deployers
>> the opportunity to transition with a mixed deployment.
>
>AFIK Newton can only be deployed on Xenial, there won't be support on
>Trusty (iirc my conversation with UCA folks).
>So I'm curious how you're going to do. Are you building your own packages?

OK well this changes things.  I hadn't heard of this.  In this case, I
think we need to implement whatever is rational, which is probably trusty
and xenial side by side.

Do you have a reference to the discussion about Xenial?

Regards
-steve

>
>Puppet OpenStack CI is using upstream packaging provided by Ubuntu (UCA).
>
>> Obviously YMMV and our plans may change based on the actual
>>implementation
>> details.
>>
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
>-- 
>Emilien Macchi
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Nominate Major Hayden for core in openstack-ansible-security

2016-05-06 Thread Ihor Dvoretskyi
+1 from me.
On May 6, 2016 9:12 AM, "Jimmy McCrory"  wrote:

> +1
>
> On Fri, May 6, 2016 at 7:52 AM, Dave Wilde 
> wrote:
>
>> I second that emotion +1
>>
>>
>>
>>
>> From: Truman, Travis 
>> 
>> Reply: OpenStack Development Mailing List (not for usage questions)
>>  
>> Date: May 3, 2016 at 13:49:04
>> To: OpenStack Development Mailing List (not for usage questions)
>>  
>> Subject:  [openstack-dev] [openstack-ansible] Nominate Major Hayden for
>> core in openstack-ansible-security
>>
>> Major has made an incredible number of contributions of code and reviews
>> to the OpenStack-Ansible community. Given his role as the primary author of
>> the openstack-ansible-security project, I can think of no better addition
>> to the core reviewer team.
>>
>> Travis Truman
>>
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][kubernetes] How to join the core team of kolla-kubernetes

2016-05-06 Thread Steven Dake (stdake)
Hey folks,

A lot of folks have signed up as developers for kolla-kubernetes in the spec, 
but one thing that hasn't been discussed in much detail is how to join the core 
reviewer team.

To do that, simply provide one (or preferably a bunch) of reviews on the
kolla-kubernetes spec.  This will let the kolla core team know that you have
interest in joining the kolla-kubernetes-core team.  The extra responsibility
this team has is reviewing code going into the kolla-kubernetes repository.

The specification for review is here:
https://review.openstack.org/#/c/304182/

I'd like to see this work (spec development) finish in the next two weeks or 
less so the clock is ticking :)

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-05-06 Thread Jeremy Stanley
On 2016-05-06 10:25:41 -0400 (-0400), Doug Hellmann wrote:
> Excerpts from Tony Breeds's message of 2016-05-06 09:53:11 +1000:
[...]
> > I think some of these pro-active things will be key.  A quick
> > check shows we have nearly 30 items in g-r that don't seem to be
> > used by anything.  So there is some low hanging fruit there.
> > Search for overlapping requirements and then working with the
> > impacted teams is a big job but again a very worthwhile goal.
> 
> Someone had a tool that looked at second-order dependencies, I
> think. I can't find the reference in my notes, but maybe someone
> else has it handy?
[...]

I'm not sure entirely what you're looking for. I added
http://git.openstack.org/cgit/openstack/requirements/tree/tools/cruft.sh
to try to find things in global requirements which no project
declares as a requirement. Richard Jones wrote
https://pypi.python.org/pypi/pip_missing_reqs to spot modules
projects are importing directly while failing to declare a
requirement on them (a fragile situation if one of your other
dependencies adjusts its own dependencies and drops whatever
provided that module). We don't usually list transitive-only
dependencies in global requirements unless we need to pin/skip
versions of them in some way, and instead rely on the constraints
list to represent the complete transitive dependency set.
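
To make the shape of such a check concrete, here is a rough sketch in the
spirit of cruft.sh (paths are illustrative, and real tooling would also need
to handle environment markers and setup.cfg extras properly):

    import os
    import re

    # Hedged sketch: flag global-requirements entries that no project
    # requirements file (or setup.cfg) mentions. Deliberately crude;
    # this is the idea, not the real cruft.sh.
    def requirement_name(line):
        line = line.split('#')[0].strip()
        if not line:
            return None
        return re.split(r'[<>=!;\[ ]', line, 1)[0].lower()

    def find_cruft(global_reqs_path, project_dirs):
        declared = set()
        for project in project_dirs:
            for fname in ('requirements.txt', 'test-requirements.txt',
                          'setup.cfg'):
                path = os.path.join(project, fname)
                if os.path.exists(path):
                    with open(path) as f:
                        declared.update(
                            filter(None, (requirement_name(l) for l in f)))
        with open(global_reqs_path) as f:
            wanted = set(filter(None, (requirement_name(l) for l in f)))
        return sorted(wanted - declared)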
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] [bifrost] bifrost container.

2016-05-06 Thread Mooney, Sean K
Hi everyone.

Following up on my AR from the kolla host repository session
https://etherpad.openstack.org/p/kolla-newton-summit-kolla-kolla-host-repo
I started working on creating a kolla bifrost container.

After some initial success I have hit a roadblock with the current install
playbook provided by bifrost.
In particular the install playbook both installs the ironic dependencies and
configures and runs the services.

The installation of ironic and its dependencies would not be a problem, but the
ansible service module is not capable of starting the
infrastructure services (mysql, rabbit, ...) without a running init system,
which is not present during the docker build.

When I created a bifrost container in the past I spawned an Ubuntu upstart
container, then docker exec'd into the container and ran the
Bifrost install script. This works because the init system is running and the
service module could test and start the relevant services.


This leave me with 3 paths forward.


1.   I can continue to try and make the bifrost install script work with 
the kolla build system by using sed to modify the install playbook or try start 
systemd during the docker build.

2.   I can use the kolla build system to build only part of the image

a.   the bifrost-base image would be built with the kolla build system
without running the bifrost playbook. This
would allow the existing features of the build system, such
as adding headers/footers, to be used.

b.  After the base image is built by kolla I can spawn an instance of 
bifrost-base with systemd running

c.   I can then connect to this running container and run the bifrost 
install script unmodified.

d.  Once it is finished I can stop the container and export it to an image
"bifrost-postinstall".

e.  This can either be used directly (fat container) or as the base image 
for other container that run each of the ironic services (thin containers)

3.   I can skip the kolla build system entirely and create a
script/playbook that will build the bifrost container similar to 2.


While option 1 would fully use the kolla build system, it is my least favorite
as it is both hacky and complicated to make work.
Docker really was not designed to run systemd as part of docker build.

For option 2 and 3 I can provide a single playbook/script that will fully 
automate the build but the real question I have
Is should I use the kolla build system to make the base image or not.

If anyone else has suggestions on how I can progress, please let me know, but
currently I am leaning towards option 2.

The only other option I see would be to not use a container and either install
bifrost on the host or in a VM.
These would essentially be a no-op for kolla, as we would simply have to
document how to install bifrost, which is covered
quite well as part of the bifrost project.

Regards
Sean.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Austin Design Summit Recap

2016-05-06 Thread Sean McGinnis
At the Design Summit in Austin, the Cinder team met over three days
to go over a variety of topics. This is a general summary of the
notes captured from each session.

We were also able to record most sessions. Please see the
openstack-cinder YouTube channel for all its minute and tedious
glory:

https://www.youtube.com/channel/UCJ8Koy4gsISMy0qW3CWZmaQ

Replication Next Steps
======================
Replication v2.1 was added in Mitaka. This was a first step in supporting
a simplified use case. A few drivers were able to implement support for
this in Mitaka, with a few already in the queue for support in Newton.

There is a desire to add the ability to replicate smaller groups of volumes
and control them individually for failover, failback, etc. Eventually we
would also want to expose this functionality to non-admin users. This will
allow tenants to group their volumes by application workload or other user
specific constraint and give them control over managing that workload.

It was agreed that it is too soon to expose this at this point. We would
first like to get broader vendor support for the current replication
capabilities before we add anything more. We also want to improve the admin
experience with handling full site failover. As it is today, there is a lot
of manual work that the admin would need to do to be able to fully recover
from a failover. There are ways we can make this experience better. So before
we add additional things on top of replication, we want to make sure what
we have is solid and at least slightly polished.

Personally, I would like to see some work done with Nova or some third party
entity like Smaug or other projects to be able to coordinate activities on
the compute and storage sides in order to fail over an environment completely
from a primary to secondary location.

Related to the group replication (tiramisu) work was the idea of generic
volume groups. Some sort of grouping mechanism would be required to tie in
to that. We have a grouping today with consistency groups, but that has its
own set of semantics and expectations that doesn't always fully mesh with
what users would want for group replication.

There have also been others looking at using consistency groups to enable
vendor specific functionality not quite inline with the intent of what
CGs are meant for.

We plan on creating a new concept of a group that has a set of possible types.
One of these types will be consistency, with the goal that internally we can
shift things around to convert our current CG concept to be a group of type
consistency while still keeping the API interface that users are used to for
working with them.

But beyond that we will be able to add things like a "replication" type that
will allow users to group volumes, that may or may not be able to be snapped in
an IO-order-consistent manner, but that can be acted on as a group to be
replicated. We can also expand this group type to other concepts moving
forward to meet other use cases without needing to introduce a wholly new
concept. The mechanisms for managing groups will already be in place and a new
type will be able to be added using existing plumbing.

Etherpad:
https://etherpad.openstack.org/p/cinder-newton-replication

Active/Active High Availability
===============================
Work continues on HA. Gorka gave an overview of the work completed so far and
the work left to do. We are still on the plan proposed at the Tokyo Summit,
just a lot of work to get it all implemented. The biggest variations are around
the host name used for the "clustered" service nodes and the idea that we will
not attempt to do any sort of automatic cleanup for in-progress work that gets
orphaned due to a node failure.

Etherpad:
https://etherpad.openstack.org/p/cinder-newton-activeactiveha

Mitaka Recap
============
Two sessions were devoted to going over what had changed in Mitaka. There were
a lot of things introduced that developers and code reviewers now need to be
aware of, so we wanted to spend some time educating everyone on these things.

Conditional DB Updates
----------------------
To try to eliminate races (partly related to the HA work) we will now use
conditional updates. This will eliminate the gap between checking a value and
setting it, making it one atomic DB update. Better performance than locking
around operations.
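
As a rough sketch of the pattern (not Cinder's actual code; the table and
columns below are made up for illustration):

    from sqlalchemy import Column, MetaData, String, Table

    metadata = MetaData()
    volumes = Table('volumes', metadata,
                    Column('id', String(36), primary_key=True),
                    Column('status', String(32)))

    # Hedged sketch of a conditional (compare-and-swap) update: the
    # precondition lives in the WHERE clause, so check and write
    # happen in one atomic statement instead of read-then-write.
    def set_status(conn, volume_id, expected, new):
        stmt = (volumes.update()
                .where(volumes.c.id == volume_id)
                .where(volumes.c.status == expected)
                .values(status=new))
        return conn.execute(stmt).rowcount == 1  # False: lost the race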

Microversions
-------------
API microversions was implemented in Mitaka. The new /v3 endpoint should be
used. Any change in the API should now be implemented as a microversion bump.
Devref in Cinder with details of how to use this and more detail as to when
a microversion is needed and when it is not.
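
A toy illustration of the gating pattern (loosely modelled on what the devref
describes; not Cinder's actual decorator, and all names here are invented):

    # Hedged sketch: dispatch on a requested microversion. Cinder's
    # real machinery lives in cinder.api.openstack.wsgi.
    def api_version(minimum):
        min_tuple = tuple(int(p) for p in minimum.split('.'))
        def decorator(func):
            def wrapper(self, req, *args, **kwargs):
                requested = getattr(req, 'api_version', (3, 0))
                if requested < min_tuple:
                    raise NotImplementedError(
                        'requires microversion >= %s' % minimum)
                return func(self, req, *args, **kwargs)
            return wrapper
        return decorator

    class VolumeController(object):
        @api_version('3.1')
        def new_feature(self, req):
            # Only reachable when the client opted in to >= 3.1.
            return {'feature': 'enabled'}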

Rolling Upgrades
----------------
Devref added for rolling upgrades and versioned objects. Discussed need to make
incremental DB changes rather than all in one release. First release: add new
column - write to both, read from original. Second release - write to both,
read from new column. Third release - original column can now be 

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-05-06 Thread Jesse Pretorius
On 6 May 2016 at 15:25, Doug Hellmann  wrote:

>
> Someone had a tool that looked at second-order dependencies, I think. I
> can't find the reference in my notes, but maybe someone else has it
> handy?
>

I don't have a specific tool handy, but I'm guessing that we could kludge
something together with tooling that effectively builds a venv for each
project and grabs a pip freeze from each of them. That could at least give us
a set of data to examine and work from?
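
Something as simple as this sketch might be enough for a first pass (paths
and the virtualenv invocation are assumptions):

    import os
    import subprocess

    # Hedged sketch: build a throwaway venv per project, install its
    # declared requirements, and capture the fully resolved set.
    def freeze_project(src_dir, venv_dir):
        subprocess.check_call(['virtualenv', venv_dir])
        pip = os.path.join(venv_dir, 'bin', 'pip')
        subprocess.check_call(
            [pip, 'install', '-r',
             os.path.join(src_dir, 'requirements.txt')])
        # One line per pinned dependency, including second-order ones.
        return subprocess.check_output([pip, 'freeze']).splitlines()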

We have something that kinda does this [2] although the purpose is quite
different. I would guess that we could either work out a way to make use of
this to achieve the goal through an automated process, or we could just
derive something useful from it. If this is deemed the best or only option
then I'd be happy to take this up.

If there's a better way then I'm all for it, but from what I see the pip
project has a long standing [1] issue for a resolver.

[1] https://github.com/pypa/pip/issues/988
[2] https://github.com/openstack/openstack-ansible-repo_build
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Developer Mailing List Digest April 9-22

2016-05-06 Thread Mike Perez
HTML rendered: 
http://www.openstack.org/blog/2016/05/openstack-developer-mailing-list-digest-20160422/

Success Bot Says
================
* Clarkb: infra team redeployed Gerrit on a new larger server. Should serve
  reviews with fewer 500 errors.
* danpb: woooh, finally booted a real VM using nova + os-vif + openvswitch
  + privsep 
* neiljerram: Neutron routed networks spec was merged today; great job Carl
  + everyone else who contributed!
* Sigmavirus24: Hacking 0.11.0 is the first release of the project in over
  a year.
* Stevemar: dtroyer just released openstackclient 2.4.0 - now with more network
  commands \o/
* odyssey4me: OpenStack-Ansible Mitaka 13.0.1 has been released!
* All: https://wiki.openstack.org/wiki/Successes


One Platform – Containers/Bare Metal?
=====================================
* From the unofficial board meeting [1], an interesting topic came up: how to
  truly support containers and bare metal under a common API with virtual
  machines.
* We want to underscore how OpenStack has an advantage in being able to provide
  both virtual machines and bare metal as two different resources when the
  "but the cloud should..." sentiment arises.
* The discussion around “supporting containers” was different and was not about
  Nova providing them.
  - Instead work with communities on making OpenStack the best place to run
things like Kubernetes and Docker swarm.
* We want to be supportive of bare metal and containers, but the way we want to
  be supportive is different for each.
* In the past, a common compute API was contemplated for Magnum; however, it
  was understood that such an API would result in the lowest common denominator
  of all compute types and an exceedingly complex interface.
  - Projects like Trove that want to offer these compute choices without adding
complexity within their own project can utilize solutions with Nova in
deploying virtual machines, bare metal and containers (libvirt-lxc).
* Magnum will be having a summit session [2] to discuss if it makes sense to
  build a common abstraction layer for Kubernetes, Docker swarm and Mesos.
* Opinions were expressed that both native APIs and LCD (lowest common
  denominator) APIs can co-exist. Things people want from the latter include:
  - Trove being an example of a service that doesn't need everything a native
    API would give.
  - Migrating a workload from a VM to a container.
  - Supporting hybrid deployment (VMs & containers) of their application.
  - Bringing containers (in Magnum bays) into a Heat template, and enabling
    connections between containers and other OpenStack resources.
  - Surfacing containers in Horizon.
  - Sending container metrics to Ceilometer.
  - A portable experience across container solutions.
  - Some people just want a container and don't want the complexity of the
    rest (COEs, bays, baymodels, etc.).
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2016-April/thread.html#91947


Delimiter, the Quota Management Library Proposal
================================================
* At this point, there is a fair amount of objections to developing a service
  to manage quotas for all services. We will be discussing the development of
  a library that services will use to manage their own quotas with.
* You don't need a serializable isolation level. Just use a compare-and-update
  with retries strategy; this prevents even multiple writers from
  oversubscribing a resource without needing heavy locking (see the sketch at
  the end of this section).
  - The "generation" field in the inventories table is what allows multiple
    writers to ensure a consistent view of the data without needing to rely on
    heavy lock-based semantics in relational database management systems.
* Reservations don't belong in a quota library.
  - A reservation is a time-bounded claim on some resource.
  - Quota checking returns whether the system can, right now, handle a request
    to claim a set of resources.
* Key aspects of the Delimiter library:
  - It's a library, not a service.
  - Imposes limits on resource consumption.
  - Will not be responsible for rate limiting.
  - Will not maintain data for resources. Projects will take care of
    keeping/maintaining data for the resources and resource consumption.
  - Will not have a concept of reservations.
  - Will fetch project quotas from the respective projects.
  - Will take into consideration whether a project is flat or nested.
* Delimiter will rely on the concept of a generation-id to guarantee
  sequencing. The generation-id gives a point-in-time view of resource usage
  in a project. Projects consuming Delimiter will need to provide this
  information when checking or consuming quota. At present Nova [3] has the
  concept of a generation-id.
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2016-April/thread.html#92496
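
A minimal sketch of the compare-and-update-with-retries pattern mentioned
above (SQLAlchemy Core; the usages table and its columns are invented for
the example):

    class OverQuota(Exception):
        pass

    def consume(conn, usages, project_id, amount, limit, retries=5):
        for _ in range(retries):
            row = conn.execute(
                usages.select()
                .where(usages.c.project_id == project_id)).fetchone()
            if row.used + amount > limit:
                raise OverQuota()
            # The UPDATE only lands if the generation we read is unchanged,
            # i.e. our point-in-time view of usage is still valid.
            result = conn.execute(
                usages.update()
                .where(usages.c.project_id == project_id)
                .where(usages.c.generation == row.generation)
                .values(used=row.used + amount,
                        generation=row.generation + 1))
            if result.rowcount == 1:
                return
            # Lost the race: another writer bumped the generation; retry.
        raise OverQuota("could not claim quota after %d retries" % retries)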


Newton Release Management Communication
=======================================
* Volunteers filling PTL and liaison positions are responsible for ensuring
  that communication between project teams happens smoothly.
* Email, for ...

Re: [openstack-dev] [kolla] xenial or trusty

2016-05-06 Thread Jesse Pretorius
On 6 May 2016 at 16:27, Jeffrey Zhang  wrote:

>
> On Fri, May 6, 2016 at 9:09 PM, Jesse Pretorius wrote:
>
>> FWIW OpenStack-Ansible is choosing to support deployment on both Ubuntu
>> 14.04 LTS and Ubuntu 16.04 LTS for both the Newton and Ocata cycles, with
>> the current proposal to drop it in P. The intent is to provide our
>> deployers the opportunity to transition with a mixed deployment.
>
>
> Do you mean the host/baremetal OS? OpenStack-Ansible deploys OpenStack
> in LXC, so it really shouldn't care about the host machine's OS. Kolla
> doesn't care about it either.
> I think openstack-ansible uses a specific LXC image and does not support
> multiple base images.
>
> If not, could you provide any proof of this?
>

OSA supports the implementation of OpenStack on bare metal or in LXC
machine containers, so we need to cater for both. When an LXC machine
container is deployed we've chosen to use the strategy of always
implementing the same OS in the container as is implemented on the host.
This simplifies our testing greatly.

For the sake of background information, seeing as you asked, the base LXC
image we're using comes from https://images.linuxcontainers.org/ giving us
the ability to support multiple versions, multiple distributions and
multiple architectures, and it's especially nifty that the entire image
build process is open source and therefore can be implemented and
customised by our deployers.

I guess this is similar for Kolla in a different way because the image
pipeline is defined by the project and implemented through the docker image
building processes.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][infra] HW upgrade for tripleo CI

2016-05-06 Thread James Slagle
On Fri, May 6, 2016 at 10:36 AM, Derek Higgins  wrote:
> Hi All,
>    the long-awaited RAM and SSDs have arrived for the tripleo rack.
> I'd like to schedule a time next week to do the install, which will
> involve an outage window. We could attempt to do it node by node, but
> the controller needs to come down at some stage anyway, and doing
> other node groups one at a time would take all day as we would have to
> wait for jobs to finish on each one as we go along.
>
>    I'm suggesting we do it on one of Monday (maybe a little soon at
> this stage), Wednesday or Friday (mainly because those best suit me);
> has anybody any suggestions why one day would be better than the
> others?

+1 to going ahead with the upgrades now. I don't have a strong preference
for any day; since you're going to be the one doing most of the work, I'd
say pick the day that works best for you. I can plan on being available to
help out if you need to hand anything over. Should we coordinate via an
etherpad or anything?




-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] OpenStack UX core nomination

2016-05-06 Thread David Lyle
+1

On Thu, May 5, 2016 at 2:11 PM, Kruithof Jr, Pieter
 wrote:
> I would like to nominate Shamail Tahir as a core for the OpenStack UX project.
>
> Shamail has been central to developing a set of personas for the overall 
> community and to providing his significant expertise with customers.  In some 
> ways, he has also been our focal point to the other community projects.
>
> His nomination supports the goal of OpenStack UX to support cross-project 
> initiatives.
>
> Piet
>
> PTL, OpenStack UX project
>
> Piet Kruithof
>
> Sr User Experience Architect,
> Intel Open Source Technology Group
>
> Project Technical Lead (PTL)
> OpenStack UX project
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Nominate Major Hayden for core in openstack-ansible-security

2016-05-06 Thread Jimmy McCrory
+1

On Fri, May 6, 2016 at 7:52 AM, Dave Wilde 
wrote:

> I second that emotion +1
>
>
>
>
> From: Truman, Travis 
> 
> Reply: OpenStack Development Mailing List (not for usage questions)
>  
> Date: May 3, 2016 at 13:49:04
> To: OpenStack Development Mailing List (not for usage questions)
>  
> Subject:  [openstack-dev] [openstack-ansible] Nominate Major Hayden for
> core in openstack-ansible-security
>
> Major has made an incredible number of contributions of code and reviews
> to the OpenStack-Ansible community. Given his role as the primary author of
> the openstack-ansible-security project, I can think of no better addition
> to the core reviewer team.
>
> Travis Truman
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] What's Up, Doc? 6 May 2016

2016-05-06 Thread Ildikó Váncsa
Hi Lana,

Thanks for the summary, it's a pretty good read to catch up on what happened 
recently.

I have one question; I might have missed a few entries, so please point me to 
the right document in that case. We had a docco session with the Telemetry team 
and agreed that moving the documentation snippets, like for instance the 
Install Guide, back into the project trees is a really good step; we're very 
supportive of it. In this sense I would like to ask about the plans regarding 
the Admin Guide. We have a chapter there which is on the one hand outdated and 
on the other hand would be better moved under the project trees as well. Is 
this plan/desire in line with your plans regarding that document?

Thanks,
/Ildikó

> -Original Message-
> From: Lana Brindley [mailto:openst...@lanabrindley.com]
> Sent: May 06, 2016 08:13
> To: enstack.org; OpenStack Development Mailing List; 
> openstack-i...@lists.openstack.org
> Subject: What's Up, Doc? 6 May 2016
> 
> Hi everyone,
> 
> I hope you all had a safe journey home from Summit, and are now fully 
> recovered from all the excitement (and jetlag)! I'm really
> pleased with the amount of progress we made this time around. We have a 
> definitive set of goals for Newton, and I'm confident that
> they're all moving us towards a much better docs suite overall. Of course, 
> the biggest and most important work we have to do is to get
> our Install Guide changes underway. I'm very excited to see the new method 
> for documenting OpenStack installation, and can't wait
> to see all our big tent projects contributing to docs in such a meaningful 
> way. Thank you to everyone (in the room and online) who
> contributed to the Install Guide discussion, and helped us move forward on 
> this important project.
> 
> In other news, I've written a wrapup of the Austin design summit on my blog, 
> which you might be interested in:
> http://lanabrindley.com/2016/05/05/openstack-newton-summit-docs-wrapup/
> 
> == Progress towards Newton ==
> 
> 152 days to go!
> 
> Bugs closed so far: 61
> 
> Because we have such a specific set of deliverables carved out for Newton, 
> I've made them their own wiki page:
> https://wiki.openstack.org/wiki/Documentation/NewtonDeliverables
> Feel free to add more detail and cross things off as they are achieved 
> throughout the release. I will also do my best to ensure it's kept
> up to date for each newsletter.
> 
> One of the first tasks we've started work on after Summit is moving the Ops 
> and HA Guides out of their own repositories and into
> openstack-manuals. As a result, those repositories are now frozen, and any 
> work you want to do on those books should be in
> openstack-manuals.
> 
> We are almost ready to publish the new RST version of the Ops Guide, there's 
> just a few cleanup edits going in now, so make sure you
> have the right book, in the right repo from now on. This was our very last 
> book remaining in DocBook XML, so the docs toolchain will
> be removing DocBook XML support. See spec https://review.openstack.org/311698 
> for details.
> 
> Another migration note is that the API reference content is moving from 
> api-site to project specific repositories and api-site is now
> frozen. For more detail, see Anne's email: 
> http://lists.openstack.org/pipermail/openstack-docs/2016-May/008536.html
> 
> == Mitaka wrapup ==
> 
> We performed a Mitaka retrospective at Summit, notes are here: 
> https://etherpad.openstack.org/p/austin-docs-mitakaretro
> 
> In particular, I'd like to call out our hard working tools team Andreas and 
> Christian, all our Speciality Team leads, and the Mitaka release
> managers Brian and Olga. Well done on a very successful release, everyone :)
> 
> Total bugs closed: 645
> 
> == Site Stats ==
> 
> Thanks to the lovely people at Foundation (thanks Allison!) I now have access 
> to more stats than I could possibly guess what to do
> with, and I'm hoping to be able to share some of these with you through the 
> newsletter. If there's something in particular you would
> like to see, then please let me know and I'll endeavour to record it here!
> 
> So far I can tell you that docs.openstack.org had 1.63M unique pageviews in 
> April, down slightly from 1.72M in March, and the average
> session duration is just over six minutes, at just under 4 pages per 
> session.
> 
> == Doc team meeting ==
> 
> Next meetings:
> 
> We'll be restarting the meeting series next week.
> 
> Next meetings:
> US: Wednesday 11 May, 19:00 UTC
> APAC: Wednesday 18 May, 00:30 UTC
> 
> Please go ahead and add any agenda items to the meeting page here:
> https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting
> 
> --
> 
> Keep on doc'ing!
> 
> Lana
> 
> https://wiki.openstack.org/wiki/Documentation/WhatsUpDoc#6_May_2016
> 
> --
> Lana Brindley
> Technical Writer
> Rackspace Cloud Builders Australia
> http://lanabrindley.com

__

Re: [openstack-dev] [TripleO][infra] HW upgrade for tripleo CI

2016-05-06 Thread Jason Guiditta

On 06/05/16 09:57 -0500, Ben Nemec wrote:

> \o/
>
> On 05/06/2016 09:36 AM, Derek Higgins wrote:
>> Hi All,
>>    the long-awaited RAM and SSDs have arrived for the tripleo rack.
>> I'd like to schedule a time next week to do the install, which will
>> involve an outage window. We could attempt to do it node by node, but
>> the controller needs to come down at some stage anyway, and doing
>> other node groups one at a time would take all day as we would have
>> to wait for jobs to finish on each one as we go along.
>>
>>    I'm suggesting we do it on one of Monday (maybe a little soon at
>> this stage), Wednesday or Friday (mainly because those best suit me);
>> has anybody any suggestions why one day would be better than the
>> others?
>
> I would probably suggest Monday or Friday, since it usually seems like
> CI is the quietest at the ends of the week.

From building rpms for releases, I would suggest you consider not
Friday.  If something is going to go wrong, it is 10 times more likely
to happen on a Friday.  Just a word of caution.

-j

>>    The other option is that we do nothing until the rack is moved
>> later in the summer, but the exact timing of this is now up in the
>> air a little, so I think it's best we just bite the bullet and do
>> this ASAP without waiting.
>
> Yeah, that's still a couple of months off, and it seems silly to have
> all that hardware sitting around unused for so long.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] xenial or trusty

2016-05-06 Thread Ian Cordasco
 

-Original Message-
From: Emilien Macchi 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: May 6, 2016 at 10:34:29
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [kolla] xenial or trusty

> On Fri, May 6, 2016 at 9:09 AM, Jesse Pretorius
> wrote:
> > On 4 May 2016 at 19:21, Emilien Macchi wrote:
> >>
> >> On Wed, May 4, 2016 at 1:52 PM, Jeffrey Zhang  
> >> wrote:
> >> > I'd like to lock the tag version in certain branch. One branch only
> >> > support
> >> > one
> >> > distro release.
> >> >
> >> > For example, the mitaka branch only build on Trusty and the
> >> > master/newton
> >> > branch
> >> > only build on Xenial.
> >> >
> >> > So, the branch and OS matrix should like ( fix me and the ?)
> >> >
> >> > Ubuntu CentOS Debian OracleLinux
> >> > Liberty 14.04 7 ? ?
> >> > Mitaka 14.04 7 ? ?
> >> > Master 16.04 7 ? ?
> >>
> >> FWIW, this is what we plan to do in Puppet OpenStack CI (except we
> >> don't gate on OracleLinux & Debian).
> >
> >
> > FWIW OpenStack-Ansible is choosing to support deployment on both Ubuntu
> > 14.04 LTS and Ubuntu 16.04 LTS for both the Newton and Ocata cycles, with
> > the current proposal to drop it in P. The intent is to provide our deployers
> > the opportunity to transition with a mixed deployment.
>  
> AFAIK Newton can only be deployed on Xenial; there won't be support on
> Trusty (iirc my conversation with UCA folks).
> So I'm curious how you're going to do that. Are you building your own packages?
>  
> Puppet OpenStack CI is using upstream packaging provided by Ubuntu (UCA).

OSA installs from source for OpenStack services and all Python dependencies. 
This is how it can support both simultaneously.

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] xenial or trusty

2016-05-06 Thread Emilien Macchi
On Fri, May 6, 2016 at 9:09 AM, Jesse Pretorius
 wrote:
> On 4 May 2016 at 19:21, Emilien Macchi  wrote:
>>
>> On Wed, May 4, 2016 at 1:52 PM, Jeffrey Zhang 
>> wrote:
>> > I'd like to lock the tag version in certain branch. One branch only
>> > support
>> > one
>> > distro release.
>> >
>> > For example, the mitaka branch only build on Trusty and the
>> > master/newton
>> > branch
>> > only build on Xenial.
>> >
>> > So, the branch and OS matrix should like ( fix me and the ?)
>> >
>> >   Ubuntu CentOS Debian  OracleLinux
>> > Liberty14.047  ? ?
>> > Mitaka 14.047  ? ?
>> > Master 16.047  ? ?
>>
>> FWIW, this is what we plan to do in Puppet OpenStack CI (except we
>> don't gate on OracleLinux & Debian).
>
>
> FWIW OpenStack-Ansible is choosing to support deployment on both Ubuntu
> 14.04 LTS and Ubuntu 16.04 LTS for both the Newton and Ocata cycles, with
> the current proposal to drop it in P. The intent is to provide our deployers
> the opportunity to transition with a mixed deployment.

AFAIK Newton can only be deployed on Xenial; there won't be support on
Trusty (iirc my conversation with UCA folks).
So I'm curious how you're going to do that. Are you building your own packages?

Puppet OpenStack CI is using upstream packaging provided by Ubuntu (UCA).

> Obviously YMMV and our plans may change based on the actual implementation
> details.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] xenial or trusty

2016-05-06 Thread Jeffrey Zhang
On Fri, May 6, 2016 at 9:09 PM, Jesse Pretorius 
wrote:

> FWIW OpenStack-Ansible is choosing to support deployment on both Ubuntu
> 14.04 LTS and Ubuntu 16.04 LTS for both the Newton and Ocata cycles, with
> the current proposal to drop it in P. The intent is to provide our
> deployers the opportunity to transition with a mixed deployment.


Do you mean the host/baremetal OS? OpenStack-Ansible deploys OpenStack in
LXC, so it really shouldn't care about the host machine's OS. Kolla doesn't
care about it either.
I think openstack-ansible uses a specific LXC image and does not support
multiple base images.

If not, could you provide any proof of this?

-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][scheduler] Next Scheduler sub-team meeting

2016-05-06 Thread Edward Leafe
The next meeting for the Scheduler sub-team will be on Monday, May 9 at 1400 
UTC (http://www.timeanddate.com/worldclock/fixedtime.html?iso=20160509T14)

The agenda for the meeting is here; please add any items that you wish to 
discuss: 
https://wiki.openstack.org/wiki/Meetings/NovaScheduler#Agenda_for_next_meeting


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][infra] HW upgrade for tripleo CI

2016-05-06 Thread Ben Nemec
\o/

On 05/06/2016 09:36 AM, Derek Higgins wrote:
> Hi All,
>    the long-awaited RAM and SSDs have arrived for the tripleo rack.
> I'd like to schedule a time next week to do the install, which will
> involve an outage window. We could attempt to do it node by node, but
> the controller needs to come down at some stage anyway, and doing
> other node groups one at a time would take all day as we would have to
> wait for jobs to finish on each one as we go along.
> 
>    I'm suggesting we do it on one of Monday (maybe a little soon at
> this stage), Wednesday or Friday (mainly because those best suit me);
> has anybody any suggestions why one day would be better than the
> others?

I would probably suggest Monday or Friday, since it usually seems like
CI is the quietest at the ends of the week.

> 
>    The other option is that we do nothing until the rack is moved
> later in the summer, but the exact timing of this is now up in the air
> a little, so I think it's best we just bite the bullet and do this ASAP
> without waiting.

Yeah, that's still a couple of months off, and it seems silly to have
all that hardware sitting around unused for so long.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Nominate Major Hayden for core in openstack-ansible-security

2016-05-06 Thread Dave Wilde
I second that emotion +1



From: Truman, Travis 

Reply: OpenStack Development Mailing List (not for usage questions) 

Date: May 3, 2016 at 13:49:04
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  [openstack-dev] [openstack-ansible] Nominate Major Hayden for core in 
openstack-ansible-security

Major has made an incredible number of contributions of code and reviews to the 
OpenStack-Ansible community. Given his role as the primary author of the 
openstack-ansible-security project, I can think of no better addition to the 
core reviewer team.

Travis Truman


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][infra] HW upgrade for tripleo CI

2016-05-06 Thread Derek Higgins
Hi All,
   the long-awaited RAM and SSDs have arrived for the tripleo rack.
I'd like to schedule a time next week to do the install, which will
involve an outage window. We could attempt to do it node by node, but
the controller needs to come down at some stage anyway, and doing
other node groups one at a time would take all day as we would have to
wait for jobs to finish on each one as we go along.

   I'm suggesting we do it on one of Monday (maybe a little soon at
this stage), Wednesday or Friday (mainly because those best suit me);
has anybody any suggestions why one day would be better than the
others?

   The other option is that we do nothing until the rack is moved
later in the summer, but the exact timing of this is now up in the air
a little, so I think it's best we just bite the bullet and do this ASAP
without waiting.

thanks,
Derek.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-05-06 Thread Doug Hellmann
Excerpts from Tony Breeds's message of 2016-05-06 09:53:11 +1000:
> On Thu, May 05, 2016 at 02:35:46PM -0400, Doug Hellmann wrote:
> 
> > This has been a lively thread, and the summit session was similarly
> > animated. I'm glad to see so much interest in managing our dependencies!
> > 
> > As we discussed at the summit, my primary objective with dependency
> > management this cycle is actually to spin it out into its own team, like
> > we did with stable management over the last year. We discussed several
> > things that team might undertake, including reviewing all of our
> > existing dependencies to ensure they are all still actually needed;
> > reviewing any overlap between dependencies to try to remove items from
> > the list; and implementing some of the other changes we discussed such
> > as allowing overlapping ranges between the global and per-project lists.
> 
> I think some of these pro-active things will be key.  A quick check shows we
> have nearly 30 items in g-r that don't seem to be used by anything, so there
> is some low-hanging fruit there.  Searching for overlapping requirements and
> then working with the impacted teams is a big job, but again a very
> worthwhile goal.

Someone had a tool that looked at second-order dependencies, I think. I
can't find the reference in my notes, but maybe someone else has it
handy?

> 
> > We had no volunteers to serve as PTL of that new team,
> 
> I was only half joking when I said I can learn how to be a PTL in 2 things at
> the same time :)

Yeah, I wasn't going to do that to you based on an off-hand comment. If
you want it when we decide we're ready for the team to form, that's
entirely up to you and the electorate. :-)

> 
> Regardless, let's try to grow a team from the volunteers, then we can decide
> what the spin-off looks like.

+1

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Dragonflow] - IRC Meeting time

2016-05-06 Thread Gal Sagie
Hello All,

During the summit we received feedback that many people in the US time zone
would like to attend our IRC meetings.

Our current time is 0900 UTC, which is good for Europe/Israel/China, where
most of our contributors come from.

If you are in the US and would like to participate in our IRC meeting,
please propose a time on Monday that best fits you (please keep in mind it
should be as early as possible, so the rest of us are able to attend as
well).

If we see there is enough interest, we will start alternating the IRC
meeting times.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Dragonflow] - No IRC Meeting (5/9)

2016-05-06 Thread Gal Sagie
Hello Everyone,

We will not have an IRC meeting this upcoming Monday (5/9); we will resume
our meetings at the regular time the week after.

Thanks
Gal.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] better solution for the non-ini format configure file

2016-05-06 Thread Steven Dake (stdake)


On 5/5/16, 4:52 AM, "Paul Bourke"  wrote:

>TL;DR keep globals.yml to a minimum, customise configs via
>host_vars/group_vars
>
>It seems right now the "best" approach may be to tokenise variables as
>required. This is the approach we currently use in Oracle. There are two
>other approaches I can think of available to us:
>
>1) The overwrite mechanism as Jeffrey mentioned
>2) Make the merge_configs script modular so it can handle more formats
>than just ini
>
>The problem with 1) is that it is heavy weight for the Operator who
>wants to just customise one or two variables such as WEBROOT. Now they
>have to copy and maintain the entire config file.
>
>The problem with 2) is that I feel it's a burden on us to write and
>maintain code that can merge many different file formats. It could be
>done, though may potentially be outside the scope of the project.

I'm a fan of this option.  It is not outside the scope of our project to
maintain fundamental building blocks to execute various parts of our
software stack (in this case reconfigure).
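
For illustration, option 2 could start as a small dispatcher keyed on file
type; a rough sketch, not Kolla's actual merge_configs (configparser here
stands in for the real ini merging):

    # Ini files get a real key-level merge; unknown formats fall back to
    # "last file wins", which is effectively the overwrite mechanism from
    # option 1.
    import configparser
    import os
    import shutil

    def merge_ini(paths, dest):
        parser = configparser.ConfigParser()
        parser.read(paths)              # later files override earlier keys
        with open(dest, "w") as out:
            parser.write(out)

    def overwrite(paths, dest):
        shutil.copyfile(paths[-1], dest)

    MERGERS = {".ini": merge_ini}

    def merge_configs(paths, dest):
        ext = os.path.splitext(dest)[1]
        MERGERS.get(ext, overwrite)(paths, dest)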

Regards
-steve
>
>The tokenisation approach is unfortunately against what is described in
>"Kolla's Deployment Philosophy"[0] though the reality may be that this
>approach is the most Operator friendly. In regards to concern of
>globals.yml growing unmanageable, I feel globals.yml is overused and
>should only store the bare minimum. Service specific variables should be
>kept within their own role files (e.g. ansible/roles/horizon/defaults),
>and then documented which are available for tweaking via top level
>host_vars/group_vars. This is standard practice in other Ansible roles
>I've come across.
>
>-Paul
>
>[0] http://docs.openstack.org/developer/kolla/deployment-philosophy.html
>
>On 04/05/16 16:28, Mauricio Lima wrote:
>> I agree with your approach, Jeffrey; although it is not ideal, it is an
>> approach already used in Kolla.
>>
>> 2016-05-04 12:01 GMT-03:00 Jeffrey Zhang:
>>
>> Recently, Jack Ning pushed a PS [0], which exports `WEBROOT` to
>> the globals.yml file, because there is currently no way to change the
>> horizon/apache configuration file.
>>
>> The root cause is that Kolla does not support non-ini format
>> configuration files. For ini-format files, we use a merge_config
>> module [1] to merge all the files found, but that does not work for
>> the configuration files of apache, rabbitmq and so on.
>>
>> I like the current merge_config implementation. It is direct and easy
>> to use. With puppet, by contrast, we have to remember the variable
>> names defined in the module and have no way to add user-defined
>> variables.
>>
>> Exporting variables to globals is very bad and ugly. It will become a
>> disaster as more and more variables are exported.
>>
>> So we should come up with a better solution to handle these
>> configuration files.
>>
>> One solution I have is an overwrite mechanism: for example, when
>> there is a file at /etc/kolla/config/apache.conf, it will overwrite
>> the template in the roles. But this is still not ideal.
>>
>> Does anybody have a better solution?
>>
>> [0] https://review.openstack.org/306928
>> [1] http://git.openstack.org/cgit/openstack/kolla/tree/ansible/action_plugins/merge_configs.py
>>
>> --
>> Regards,
>> Jeffrey Zhang
>> Blog: http://xcodest.me 
>>
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>>
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Austin summit - session recap/summary

2016-05-06 Thread Paul Belanger
On Tue, May 03, 2016 at 05:34:55PM +0100, Steven Hardy wrote:
> Hi all,
> 
> Some folks have requested a summary of our summit sessions, as has been
> provided for some other projects.
> 
> I'll probably go into more detail on some of these topics either via
> subsequent, more focussed threads and/or some blog posts, but what follows is
> an overview of our summit sessions [1] with notable actions or decisions
> highlighted.  I'm including some of my own thoughts and conclusions; folks
> are welcome/encouraged to follow up with their own clarifications or
> different perspectives :)
> 
> TripleO had a total of 5 sessions in Austin I'll cover them one-by-one:
> 
> -
> Upgrades - current status and roadmap
> -
> 
> In this session we discussed the current state of upgrades - initial
> support for full major version upgrades has been implemented, but the
> implementation is monolithic, highly coupled to pacemaker, and inflexible
> with regard to third-party extraconfig changes.
> 
> The main outcomes were that we will add support for more granular
> definition of the upgrade lifecycle to the new composable services format,
> and that we will explore moving towards the proposed lightweight HA
> architecture to reduce the need for so much pacemaker specific logic.
> 
> We also agreed that investigating use of mistral to drive upgrade workflows
> was a good idea - currently we have a mixture of scripts combined with Heat
> to drive the upgrade process, and some refactoring into discrete mistral
> workflows may provide a more maintainable solution.  Potential for using
> the existing SoftwareDeployment approach directly via mistral (outside of
> the heat templates) was also discussed as something to be further
> investigated and prototyped.
> 
> We also touched on the CI implications of upgrades - we've got an upgrades
> job now, but we need to ensure coverage of full release-to-release upgrades
> (not just commit to commit).
> 
> ---
> Containerization status/roadmap
> ---
> 
> In this session we discussed the current status of containers in TripleO
> (which is to say, the container based compute node which deploys containers
> via Heat onto an an Atomic host node that is also deployed via Heat), and
> what strategy is most appropriate to achieve a fully containerized TripleO
> deployment.
> 
> Several folks from Kolla participated in the session, and there was
> significant focus on where work may happen such that further collaboration
> between communities is possible.  To some extent this discussion on where
> (as opposed to how) proved a distraction and prevented much discussion on
> supportable architectural implementation for TripleO, thus what follows is
> mostly my perspective on the issues that exist:
> 
> Significant uncertainty exists wrt integration between Kolla and TripleO -
> there's largely consensus that we want to consume the container images
> defined by the Kolla community, but much less agreement that we can
> feasably switch to the ansible-orchestrated deployment/config flow
> supported by Kolla without breaking many of our primary operator interfaces
> in a fundamentally unacceptable way, for example:
> 
> - The Mistral based API is being implemented on the expectation that the
>   primary interface to TripleO deployments is a parameters schema exposed
>   by a series of Heat templates - this is no longer true in a "split stack"
>   model where we have to hand off to an alternate service orchestration tool.
> 
> - The tripleo-ui (based on the Mistral based API) consumes heat parameter
>   schema to build it's UI, and Ansible doesn't support the necessary
>   parameter schema definition (such as types and descriptions) to enable
>   this pattern to be replicated.  Ansible also doesn't provide a HTTP API,
>   so we'd still have to maintain and API surface for the (non python) UI to
>   consume.
> 
> We also discussed ideas around integration with kubernetes (a hot topic on
> the Kolla track this summit), but again this proved inconclusive beyond
> that yes someone should try developing a PoC to stimulate further
> discussion.  Again, significant challenges exist:
> 
> - We still need to maintain the Heat parameter interfaces for the API/UI,
>   and there is also a strong preference to maintain puppet as a tool for
>   generating service configuration (so that existing operator integrations
>   via puppet continue to function) - this is a barrier to directly
>   consuming the kolla-kubernetes effort directly.
> 
> - A COE layer like kubernetes is a poor fit for deployments where operators
>   require strict control of service placement (e.g. exactly which nodes a
>   service runs on, IP address assignments to specific nodes etc) - this is
>   already a strong requirement for TripleO users and we need to figure out
>   if/how it's possible to control container placement

Re: [openstack-dev] [kolla] xenial or trusty

2016-05-06 Thread Jesse Pretorius
On 4 May 2016 at 19:21, Emilien Macchi  wrote:

> On Wed, May 4, 2016 at 1:52 PM, Jeffrey Zhang 
> wrote:
> > I'd like to lock the tag version in certain branch. One branch only
> support
> > one
> > distro release.
> >
> > For example, the mitaka branch only build on Trusty and the master/newton
> > branch
> > only build on Xenial.
> >
> > So, the branch and OS matrix should like ( fix me and the ?)
> >
> >   Ubuntu CentOS Debian  OracleLinux
> > Liberty14.047  ? ?
> > Mitaka 14.047  ? ?
> > Master 16.047  ? ?
>
> FWIW, this is what we plan to do in Puppet OpenStack CI (except we
> don't gate on OracleLinux & Debian).


FWIW OpenStack-Ansible is choosing to support deployment on both Ubuntu
14.04 LTS and Ubuntu 16.04 LTS for both the Newton and Ocata cycles, with
the current proposal to drop it in P. The intent is to provide our
deployers the opportunity to transition with a mixed deployment.

Obviously YMMV and our plans may change based on the actual implementation
details.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Nominate Major Hayden for core in openstack-ansible-security

2016-05-06 Thread Andy McCrae
+1
Ship it.

Andy

On 6 May 2016 at 07:42, Jesse Pretorius  wrote:

> On 3 May 2016 at 23:59, Jim Rollenhagen  wrote:
>
>>
>> Sounds like a major win for the team!
>
>
> Haha! Quite!
>
> +1 for me on the proposal. I've spoken to Major and confirmed that he has
> the support from his employer to commit the time necessary on a consistent
> basis.
>
> --
> Jesse Pretorius
> IRC: odyssey4me
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Nominate Major Hayden for core in openstack-ansible-security

2016-05-06 Thread Jesse Pretorius
On 3 May 2016 at 23:59, Jim Rollenhagen  wrote:

>
> Sounds like a major win for the team!


Haha! Quite!

+1 for me on the proposal. I've spoken to Major and confirmed that he has
the support from his employer to commit the time necessary on a consistent
basis.

-- 
Jesse Pretorius
IRC: odyssey4me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OSC transition

2016-05-06 Thread Richard Theis
"Na Zhu"  wrote on 05/04/2016 09:54:39 PM:

> From: "Na Zhu" 
> To: "OpenStack Development Mailing List \(not for usage questions\)"
> 
> Date: 05/04/2016 09:59 PM
> Subject: Re: [openstack-dev] [neutron] OSC transition
> 
> Hi Darek,
> 
> Thanks for your information, but the BGP commands are not listed here
> https://etherpad.openstack.org/p/osc-neutron-support :(

You are welcome to add the BGP commands to the list.

> 
> 
> 
> Regards,
> Juno Zhu
> IBM China Development Labs (CDL) Cloud IaaS Lab
> Email: na...@cn.ibm.com
> 5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong 
> New District, Shanghai, China (201203)
> 
> 
> 
> From:Darek Smigiel 
> To:"OpenStack Development Mailing List (not for usage 
> questions)" 
> Date:2016/05/05 02:34
> Subject:Re: [openstack-dev] [neutron] OSC transition
> 
> 
> 
> 
> On May 4, 2016, at 6:10 AM, Na Zhu  wrote:
> 
> Hi Richard,
> 
> I read the contents of the link; I think the discussions from the Austin
> summit have not been added to the webpage yet.
> But from here https://etherpad.openstack.org/p/newton-neutron-
> future-neutron-client, it mentions that python-neutronclient provides the
> OSC plugin for neutron-*aas.
> Does that mean all neutron-*aas CLIs still live in the python-
> neutronclient repo? If yes, should every neutron-*aas owner update 
> the CLIs from neutron to openstack?
> 
> I found Dean Troyer set the [Blueprint neutron-client] "implement 
> neutron commands" state to obsolete; is the OSC transition still 
> moving along?
> 
> 
> The transition is in progress. Here is the spec for it [1]. Probably 
> the most important thing for you is [2], where all the required 
> commands are described.
> 
> 
> [1] http://docs.openstack.org/developer/python-neutronclient/devref/
> transition_to_osc.html
> [2] https://etherpad.openstack.org/p/osc-neutron-support
> 
> Darek Smigiel (dasm)
> 
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] api-ref docs cleanup review sprint 5/9 and 5/11

2016-05-06 Thread Sean Dague
On 05/03/2016 04:12 PM, Matt Riedemann wrote:
> We discussed at the summit a need for a review sprint on the api-ref
> docs cleanup effort that's going on.  See Sean's email on that from a
> few weeks ago [1].
> 
> So we plan to do a review sprint next Monday 5/9 and Wednesday 5/11.
> 
> The series to review is here [2].
> 
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2016-April/092936.html
> [2] https://review.openstack.org/#/q/status:open+topic:bp/api-ref-in-rst

This is both a content writing sprint and a content reviewing sprint.
We've got 57 files that need to get 4 phases of processing. For this
exercise we're going to focus on the first 3, which are:

* method verification (are all REST methods specified and in a
consistent order)
* parameter verification (are all parameters listed for request and
response, and are they correct)
* example verification (do all requests / responses with bodies have
examples, and are those explained what is going on).

https://wiki.openstack.org/wiki/NovaAPIRef explains what is needed at
each step in detail.

There is also a burndown graph for this effort here:
http://burndown.dague.org/ to be able to see the progress as we go (it's
updated hourly).

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Nova] Any Code Examples of Other Services Using Keystone Policy?

2016-05-06 Thread Sean Dague
On 05/05/2016 06:03 PM, Dan Smith wrote:
>> I'm currently working on the spec for Project ID Validation in Nova
>> using Keystone. The outcome of the Design Summit Session was that the
>> Nova service user would use the Keystone policy to establish whether the
>> requester had access to the project at all to verify the id. I was
>> wondering if there were any code examples of a non-Keystone service
>> using the Keystone policy in this way?
>>
>> Also if I misunderstood something, please feel free to correct me or to
>> clarify!
> 
> Just to clarify, the outcome as I understood it is:
> 
> /Instead/ of a Nova service user, Nova should use the credentials of the
> user doing the quota manipulation to authenticate a request to keystone
> to check for the presence of the target user. That means doing a HEAD or
> GET on the tenant in keystone using the credentials provided to Nova for
> the quota operation. The only Keystone policy involved is making sure
> that the user has permission to do that HEAD or GET operation (which is
> really just a deployment thing).

Right, that's how I remember it.

The important additional piece of information is that these commands are Nova
admin commands, used to set quota for other users.

I think the important next step forward here is to actually see what the
code looks like, as the actual code to check against keystone is going
to go right here -
https://github.com/openstack/nova/blob/8a93fd13786358f882a53e0bf104eeed23541465/nova/api/openstack/compute/quota_sets.py#L107

And needs to function with what we have at hand, which is a project_id
and a nova.context.
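
A hedged sketch of what that check might look like with keystoneauth
(keystone_endpoint is assumed to be the identity /v3 endpoint; error
handling trimmed):

    from keystoneauth1 import session
    from keystoneauth1 import token_endpoint

    def project_exists(context, keystone_endpoint, project_id):
        # Re-use the caller's token from the nova context: keystone then
        # applies its own policy to the GET, which is exactly the
        # permission check described above.
        auth = token_endpoint.Token(keystone_endpoint, context.auth_token)
        sess = session.Session(auth=auth)
        url = "%s/projects/%s" % (keystone_endpoint.rstrip("/"), project_id)
        resp = sess.get(url, raise_exc=False)
        # 404 -> no such project; 403 -> the caller may not see it. Either
        # way the quota operation should be rejected.
        return resp.status_code == 200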

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] - Design summit session on the future of Neutron architecture

2016-05-06 Thread Rossella Sblendido
Hi all,

in Austin we had a session to discuss the future of Neutron
architecture. You can find the etherpad here [1].

The first part was about agents. In the last releases we have been
factoring common code out and allowing pluggability: in Liberty L2 agent
extensions were introduced and in Mitaka a common agent framework. We
were wondering whether it would be worth moving to a single agent, where
l2, l3, etc. are just roles that can be loaded according to the
configuration. We didn't reach any consensus; many people expressed doubts
regarding this approach, so for now nothing will change.

The second part was about upgrades and specifically about the
introduction of Oslo VersionedObject in Neutron. The upgrade subteam is
taking care of this [2]. This work still requires lots of effort but
everybody agreed that it's needed. We decided that new features need to
adopt OVO straight away. The only exception is if a feature uses a
resource that is already in the code base and has not been ported to OVO
yet. As action items we need to figure out which features are in flight
and need to adopt OVO and we need to fill any gap in the documentation.
The upgrades team should make authors of patches in flight aware of the
new requirements and should deliver foundational bits to build features
upon. Some patches may still land without objects right now and it’s
advised to reach upgrades subteam for recommendations if in doubt.
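
For reference, the shape of a minimal versioned object, assuming
oslo.versionedobjects as the base (the class and field names here are
invented for the example):

    from oslo_versionedobjects import base as ovo_base
    from oslo_versionedobjects import fields

    @ovo_base.VersionedObjectRegistry.register
    class PortForwarding(ovo_base.VersionedObject):
        # Any change to 'fields' must come with a VERSION bump so that
        # mixed-version neutron-servers can negotiate and backport objects
        # over RPC during a rolling upgrade.
        VERSION = '1.0'

        fields = {
            'id': fields.UUIDField(),
            'router_id': fields.UUIDField(),
            'external_port': fields.IntegerField(),
        }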

To test upgrades with two Neutron servers we might want three-node
testing. Assaf suggested that we can achieve the same results by modifying
the job that currently uses two nodes, since we can run compute services
(l2 agent, nova-compute) on the ‘new’ node in addition to the ‘old’
services, and have a connectivity test for instances explicitly landed on
different nodes.

cheers,

Rossella and Ihar


[1]
https://etherpad.openstack.org/p/newton-neutron-future-neutron-architecture
[2] https://wiki.openstack.org/wiki/Meetings/Neutron-Upgrades-Subteam

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder][Ceph][Kingbird][Tricircle][Smaug]Build a multisite disaster recovery "stack"

2016-05-06 Thread Zhipeng Huang
Hi Folks,

I was referred to this talk https://www.youtube.com/watch?v=VWFYC6W71tY by
a colleague, and I really think several projects should collaborate on this
subject.

Two projects missing from the talk that would be helpful are Smaug [1] and
Tricircle [2]. Smaug provides data protection services for VMs, whereas
Tricircle provides a single API entrance for multisite OpenStack management
(cross L2/L3 networking, volume migration/replication).

I think these projects could work together to address the issue :)

[1]https://wiki.openstack.org/wiki/Smaug
[2]https://wiki.openstack.org/wiki/Tricircle

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Prooduct Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] [blueprint] Definition of strategies

2016-05-06 Thread joehuang
Good suggestion. Agree that using "when" is not so accurate.

Best Regards
Chaoyi Huang ( Joe Huang )


-Original Message-
From: Shinobu Kinjo [mailto:shinobu...@gmail.com] 
Sent: Friday, May 06, 2016 4:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [tricircle] [blueprint] Definition of strategies

Hi Chaoyi,

In the Tricircle's blueprint, there is a section:

 7.3  Resource Create Request Dispatch

In this section, strategy for each resource is described.
It would be better to elaborate on *WHEN* more:

 *before* or *during* or *after*

e.g.,
security group's strategy is defined as follows:

 "cache when created and sync to bottom when creating server"

It would be better to describe as below:

 "cache when created and sync to bottom during creating server"

What do you think?
Any comment, suggestion or whatever you have would be appreciated!

Cheers,
Shinobu

-- 
Email:
shin...@linux.com
shin...@redhat.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle] [blueprint] Definition of strategies

2016-05-06 Thread Shinobu Kinjo
Hi Chaoyi,

In the Tricircle's blueprint, there is a section:

 7.3  Resource Create Request Dispatch

In this section, strategy for each resource is described.
It would be better to elaborate on *WHEN* more:

 *before* or *during* or *after*

e.g.,
security group's strategy is defined as follows:

 "cache when created and sync to bottom when creating server"

It would be better to describe as below:

 "cache when created and sync to bottom during creating server"

What do you think?
Any comment, suggestion or whatever you have would be appreciated!

Cheers,
Shinobu

-- 
Email:
shin...@linux.com
shin...@redhat.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] notification subteam meeting is cancelled next week

2016-05-06 Thread Balázs Gibizer
Hi, 

As we agreed on Tuesday at the subteam meeting [1], next week's meeting is
skipped due to the lack of a chairman. Please use the normal channels to
sync during the week.
The next meeting will be held on the 17th of May at 17:00 UTC in #openstack-meeting-4 [2]

Cheers,
Gibi

[1] 
http://eavesdrop.openstack.org/meetings/nova_notification/2016/nova_notification.2016-05-03-17.00.log.html
 
[2] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20160517T17 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] About the dynamic pod binding

2016-05-06 Thread Shinobu Kinjo
Yipei,

Probably you may be interested in the following section in Blueprint [1].

7.1 AZ and Pod Binding
 ...
7.x

[1] 
https://docs.google.com/document/d/18kZZ1snMOCD9IQvUKI5NVDzSASpw-QKj7l2zNqMEd3g/edit#heading=h.x14kdooqhai1

Cheers,
S


On Fri, May 6, 2016 at 3:58 PM, Shinobu Kinjo  wrote:
> Yipei,
>
> According to Chaoyi, you have a lot of experience with OpenStack and
> networking, which is awesome.
>
> I look forward to hearing from you soon.
>
> Cheers,
> Shinobu
>
> On Fri, May 6, 2016 at 12:05 PM, Yipei Niu  wrote:
>> Got it with thanks.
>>
>> Best regards,
>> Yipei
>>
>> On Fri, May 6, 2016 at 9:48 AM, joehuang  wrote:
>>>
>>> Hi, Yipei,
>>>
>>> Shinobu is correct, this should be taken into consideration in the design
>>> of dynamic pod binding.
>>>
>>> For how to schedule pods, you can refer to host-aggregate scheduling with
>>> flavors; the difference is that the scheduling granularity is at the pod
>>> level. Via tags in the flavor extra_specs and volume type extra_specs,
>>> tricircle can be aware of which type of resource the tenant wants to
>>> provision.
>>>
>>> For example:
>>> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/4/html/Configuration_Reference_Guide/host-aggregates.html
>>>
>>> So in
>>> https://docs.google.com/document/d/18kZZ1snMOCD9IQvUKI5NVDzSASpw-QKj7l2zNqMEd3g/edit,
>>> one field called resource_affinity_tag is proposed to be added to the pod
>>> table; it could be used for scheduling purposes. But this is only one
>>> proposal; you may have a better idea, and after the spec is reviewed
>>> and approved the doc can be updated to reflect the new idea.
>>>
>>> Best Regards
>>> Chaoyi Huang ( Joe Huang )
>>>
>>>
>>> -Original Message-
>>> From: Shinobu Kinjo [mailto:shinobu...@gmail.com]
>>> Sent: Friday, May 06, 2016 8:06 AM
>>> To: Yipei Niu
>>> Cc: OpenStack Development Mailing List (not for usage questions);
>>> joehuang; Zhiyuan Cai; 金城 忍
>>> Subject: Re: [tricircle] About the dynamic pod binding
>>>
>>> Hi Yipei,
>>>
>>> On Thu, May 5, 2016 at 9:54 PM, Yipei Niu  wrote:
>>> > Hi, all,
>>> >
>>> > For dynamic pod binding, I have some questions.
>>> >
>>> [snip]
>>> > 3. How is Tricircle aware of what type of resource is wanted by tenants?
>>> > For example, a tenant wants to boot VMs for CAD modelling with a
>>> > corresponding flavor. But in the current code, the flavorRef is not
>>> > involved in the function get_pod_by_az_tenant when querying pod bindings.
>>> > So do we need to modify the pod binding table to add such a column?
>>>
>>> Working through the code base, I guess you are talking about a future
>>> implementation.
>>>
>>> Cheers,
>>> Shinobu
>>>
>>> >
>>> > Best regards,
>>> > Yipei
>>>
>>>
>>>
>>> --
>>> Email:
>>> shin...@linux.com
>>> shin...@redhat.com
>>
>>
>
>
>
> --
> Email:
> shin...@linux.com
> shin...@redhat.com



-- 
Email:
shin...@linux.com
shin...@redhat.com
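
To illustrate the resource_affinity_tag idea joehuang describes in the quoted
message above, here is a minimal sketch (hypothetical field and helper names;
the approved design lives in the spec under review, not here) of pod selection
driven by a flavor extra_specs tag:

    # Hypothetical sketch of pod-level scheduling by resource affinity.
    # Only the field name resource_affinity_tag comes from the proposal
    # above; everything else is made up for illustration.
    pods = [
        {'pod_name': 'pod1', 'az_name': 'az1', 'resource_affinity_tag': 'cad'},
        {'pod_name': 'pod2', 'az_name': 'az1', 'resource_affinity_tag': 'general'},
    ]

    def get_pod_for_flavor(pods, az_name, extra_specs):
        # e.g. extra_specs = {'tricircle:resource_affinity_tag': 'cad'}
        wanted = extra_specs.get('tricircle:resource_affinity_tag')
        for pod in pods:
            if pod['az_name'] != az_name:
                continue
            if wanted is None or pod['resource_affinity_tag'] == wanted:
                return pod
        return None

    print(get_pod_for_flavor(pods, 'az1',
                             {'tricircle:resource_affinity_tag': 'cad'}))

This would answer Yipei's question 3 by consulting the flavor's extra_specs
(rather than only az_name and tenant) when querying pod bindings.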

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-05-06 Thread Miguel Angel Ajo Pelayo
Sounds good,

   I started by opening a tiny RFE that may help with the organization
of flows inside the OVS agent, for interoperability of features (SFC,
TaaS, the OVS firewall, and even port trunking with just OpenFlow); see
the sketch below. [1] [2]


[1] https://bugs.launchpad.net/neutron/+bug/1577791
[2] http://paste.openstack.org/show/495967/
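
For readers who have not opened [1] yet: the RFE is about features sharing the
OVS agent's OpenFlow pipeline without stepping on each other. A minimal sketch
(entirely hypothetical table numbers and helper, not the RFE's actual
proposal) of the kind of organization being discussed:

    # Hypothetical sketch only: each feature owns a reserved range of
    # OpenFlow tables and chains to the next feature instead of emitting
    # a final action, so SFC, TaaS and the OVS firewall can compose.
    FEATURE_TABLES = {
        'ovs-fw': (71, 79),   # firewall
        'sfc': (80, 89),      # service function chaining
        'taas': (90, 99),     # tap-as-a-service mirroring
    }

    def tables_for(feature):
        """Return the OpenFlow tables reserved for a feature."""
        start, end = FEATURE_TABLES[feature]
        return list(range(start, end + 1))

    print(tables_for('sfc'))   # [80, 81, ..., 89]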


On Fri, May 6, 2016 at 12:35 AM, Cathy Zhang  wrote:
> Hi everyone,
>
> We had a discussion on the two topics during the summit. Here is the etherpad 
> link for the discussion.
> https://etherpad.openstack.org/p/Neutron-FC-OVSAgentExt-Austin-Summit
>
> We agreed to continue the discussion on the Neutron channel on a weekly basis. It
> seems UTC 1700 ~ 1800 on Tuesdays is good for most people.
> Another option is UTC 1700 ~ UTC 1800 Friday.
>
> I will tentatively set the meeting time to UTC 1700 ~ 1800 on Tuesdays. I hope
> this time works for everyone who is interested and would like to contribute to
> this work. We plan to hold the first meeting on May 17.
>
> Thanks,
> Cathy
>
>
> -Original Message-
> From: Cathy Zhang
> Sent: Thursday, April 21, 2016 11:43 AM
> To: Cathy Zhang; OpenStack Development Mailing List (not for usage 
> questions); Ihar Hrachyshka; Vikram Choudhary; Sean M. Collins; Haim Daniel; 
> Mathieu Rohon; Shaughnessy, David; Eichberger, German; Henry Fourie; 
> arma...@gmail.com; Miguel Angel Ajo; Reedip; Thierry Carrez
> Cc: Cathy Zhang
> Subject: RE: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
> Agent extension for Newton cycle
>
> Hi everyone,
>
> We have room 400 at 3:10pm on Thursday available for discussion of the two 
> topics.
> Another option is to use the common room with roundtables in "Salon C" during 
> Monday or Wednesday lunch time.
>
> Room 400 at 3:10pm is a closed room, while Salon C is a big open room
> that can host 500 people.
>
> I am Ok with either option. Let me know if anyone has a strong preference.
>
> Thanks,
> Cathy
>
>
> -Original Message-
> From: Cathy Zhang
> Sent: Thursday, April 14, 2016 1:23 PM
> To: OpenStack Development Mailing List (not for usage questions); 'Ihar 
> Hrachyshka'; Vikram Choudhary; 'Sean M. Collins'; 'Haim Daniel'; 'Mathieu 
> Rohon'; 'Shaughnessy, David'; 'Eichberger, German'; Cathy Zhang; Henry 
> Fourie; 'arma...@gmail.com'
> Subject: RE: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
> Agent extension for Newton cycle
>
> Thanks for everyone's reply!
>
> Here is the summary based on the replies I received:
>
> 1.  We should have a meet-up for these two topics. The "to" list contains the
> people who have an interest in these topics.
> I am thinking of around lunch time on Tuesday or Wednesday, since some
> of us will fly back on Friday morning/noon.
> If this time is OK with everyone, I will find a place and let you know
> where and when to meet.
>
> 2.  There is a bug open for the QoS Flow Classifier:
> https://bugs.launchpad.net/neutron/+bug/1527671
> We can either change the bug title and modify the bug details, or start a
> new one for the common FC that covers the requirements of all relevant use
> cases. There is also a bug open for the OVS agent extension:
> https://bugs.launchpad.net/neutron/+bug/1517903
>
> 3.  There is some very rough ("ugly", as Sean put it :-)) and preliminary work
> on a common FC at https://github.com/openstack/neutron-classifier, which we
> can see how to leverage. There is also an SFC API spec that covers the FC API
> for SFC usage:
> https://github.com/openstack/networking-sfc/blob/master/doc/source/api.rst
> The following is the CLI version of the Flow Classifier for your reference:
>
> neutron flow-classifier-create [-h]
> [--description <description>]
> [--protocol <protocol>]
> [--ethertype <ethertype>]
> [--source-port <min source protocol port>:<max source protocol port>]
> [--destination-port <min destination protocol port>:<max destination protocol port>]
> [--source-ip-prefix <source IP prefix>]
> [--destination-ip-prefix <destination IP prefix>]
> [--logical-source-port <logical source port>]
> [--logical-destination-port <logical destination port>]
> [--l7-parameters <L7 parameters>] FLOW-CLASSIFIER-NAME
>
> The corresponding code is here 
> https://github.com/openstack/networking-sfc/tree/master/networking_sfc/extensions
>
> 4.  We should come up with a formal Neutron spec for the FC and another one for
> the OVS agent extension, and get everyone's review and approval. Here is the
> etherpad capturing our previous requirements discussion on the OVS agent (thanks,
> David, for the link! I remember we had this discussion before):
> https://etherpad.openstack.org/p/l2-agent-extensions-api-expansion
>
>
> More inline.
>
> Thanks,
> Cathy
>
>
> -Original Message-
> From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
> Sent: Thursday, April 14, 2016 3:34 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
> Agent extension for Newton cycle
>
> Cathy Zhang  wrote:
>
>> Hi everyone,

Re: [openstack-dev] [neutron][FWaaS] __init__ arguments issue status

2016-05-06 Thread Sridar Kandaswamy (skandasw)
Hi All:

In digging through this: yes, the neutron change altered the MRO as below,
and hence the issue.


(<class 'varmour_router.vArmourL3NATAgent'>,
 <class 'agent.L3NATAgent'>, <class 'ha.AgentMixin'>, <class 'dvr.AgentMixin'>,
 <class 'manager.Manager'>, <class 'periodic_task.PeriodicTasks'>,

<—— the issue is at this point where we have a mismatch with args

 <class 'firewall_l3_agent.FWaaSL3AgentRpcCallback'>,
 <class 'api.FWaaSAgentRpcCallbackMixin'>,
 <class 'object'>)


Nate, Margaret – thanks for digging through this – let's get together during the
day to discuss this more. Margaret, to answer your question – it worked before
due to a favorable ordering with the older, hacked inheritance relationship. We
can find a way to fix this in fwaas, but more importantly we need to get some
missing pieces into the Observer Hierarchy patch set as well.


Thanks


Sridar

From: Doug Wiegley
Reply-To: OpenStack List
Date: Thursday, May 5, 2016 at 9:40 PM
To: OpenStack List
Subject: Re: [openstack-dev] [neutron][FWaaS] __init__ arguments issue status

This break is almost certainly because of the following neutron change, to 
unwind the incestuous inheritance that was in neutron (dependency arrow was 
circular):

https://review.openstack.org/#/c/223343/

I don’t expect there will be a lot of appetite to revert that, so it will need
to be addressed in neutron-fwaas. It likely should’ve had an ML warning first;
sorry about that – this has been a longstanding issue.

doug



On May 5, 2016, at 7:00 PM, Frances, Margaret wrote:

Hi Doug.

The old and new MROs are both pretty complicated, and it’s not entirely clear 
to me yet why the original one worked. (The MROs are included below for reading 
pleasure; they're embellished to show the incoming args to self’s init and 
outgoing args to super’s init in each case.)

I’m fairly sure the APIs for the mixins can be made the same, and I’ll try 
that.  But I still wonder if in fact the problem is a base class ordering issue.

The error that 223343 produced occurs in method call #6 in the "AFTER" MRO, 
where we get the following trace:

super(PeriodicTasks, self).__init__()
TypeError: __init__() takes exactly 2 arguments (1 given)


For grins, we changed PeriodicTasks’s call to super init as suggested by the 
trace:

super(PeriodicTasks, self).__init__(conf)


At this point FWaaSAgentRpcCallbackMixin (AFTER, #8) complained:

super(FWaaSAgentRpcCallbackMixin, self).__init__(host)
TypeError: object.__init__() takes no parameters


Changing *that* class as suggested elicited the following (to me baffling) 
result:

super(FWaaSAgentRpcCallbackMixin, self).__init__()
TypeError: __init__() takes exactly 2 arguments (1 given)


I find it baffling because FWaaSAgentRpcCallbackMixin is the end of the line, 
it’s a subclass of object, and object doesn’t allow arguments to init (so whose 
init is that? that’s the next thing I’m going to look at).  (It’s for these 
same reasons that I don’t understand why things worked before the 223343 
change.)

I’m still looking at things.  (And learning about MRO, which I’ve never really 
dealt with before.)  Will run pdb and see what surfaces.

Thanks for your help.  Thoughts, comments, suggestions all welcome.
Margaret


BEFORE 223343
 1. varmour_router_vArmourL3NATAgent (host, conf)-->(host, conf)
 2. agent_L3NATAgent  (host, conf)-->(conf)
 3. firewall_l3_agent_FWaaSL3AgentRpcCallback (conf)-->(host)
 4. api_FWaaSAgentRpcCallbackMixin (host)-->(host)
 5. ha_AgentMixin (host)-->(host)
 6. dvr_AgentMixin (host)-->(host)
 7. manager_Manager (host)-->(conf)
 8. periodic_task_PeriodicTasks(conf)-->()
 9. firewall_l3_agent_FWaaSL3AgentRpcCallback(conf)-->(host)
10. api_FWaaSAgentRpcCallbackMixin(host)-->(host)
11. object

AFTER 223343
 1. varmour_router_vArmourL3NATAgent (host, conf)-->(host, conf)
 2. agent_L3NATAgent  (host, conf)-->(host)
 3. ha_AgentMixin (host)-->(host)
 4. dvr_AgentMixin (host)-->(host)
 5. manager_Manager (host)-->(conf)
 6. periodic_task_PeriodicTasks(conf)-->()
 7. firewall_l3_agent_FWaaSL3AgentRpcCallback (conf)-->(host)
 8. api_FWaaSAgentRpcCallbackMixin(host)-->(host)
 9. object
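
To see the failure in isolation, here is a self-contained sketch (toy classes,
not the real neutron/neutron-fwaas code) that reproduces the same TypeError,
followed by the usual cooperative-**kwargs pattern that removes the dependence
on a favorable ordering:

    # Toy reproduction: PeriodicTasks forwards *no* arguments to super(),
    # but the next class in the MRO requires one -- the AFTER #6 situation.
    class PeriodicTasks(object):
        def __init__(self, conf):
            super(PeriodicTasks, self).__init__()   # drops the argument

    class FWaaSCallback(object):
        def __init__(self, conf):
            super(FWaaSCallback, self).__init__()

    class Agent(PeriodicTasks, FWaaSCallback):
        # MRO: Agent -> PeriodicTasks -> FWaaSCallback -> object
        def __init__(self, conf):
            super(Agent, self).__init__(conf)

    try:
        Agent('conf')
    except TypeError as exc:
        # Python 2 reports: __init__() takes exactly 2 arguments (1 given)
        print(exc)

    # The usual fix: every cooperative __init__ accepts **kwargs, consumes
    # what it needs, and forwards the rest, so any ordering works.
    class PeriodicTasks2(object):
        def __init__(self, **kwargs):
            self.conf = kwargs.pop('conf', None)
            super(PeriodicTasks2, self).__init__(**kwargs)

    class FWaaSCallback2(object):
        def __init__(self, **kwargs):
            self.host = kwargs.pop('host', None)
            super(FWaaSCallback2, self).__init__(**kwargs)

    class Agent2(PeriodicTasks2, FWaaSCallback2):
        def __init__(self, conf, host):
            super(Agent2, self).__init__(conf=conf, host=host)

    Agent2('conf', 'host')   # succeeds regardless of base-class order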

--
Margaret Frances
Eng 4, Prodt Dev Engineering



On May 5, 2016, at 7:06 PM, Doug Hellmann wrote:

Excerpts from Nate Johnston's message of 2016-05-05 17:40:13 -0400:
FWaaS team,

After a day of looking at the tests currently failing in the FWaaS repo, I
believe I have the issue narrowed down considerably. First, to restate what
is going on.  If you check out the neutron-fwaas repository and run `tox -e
py27` in it, you will get six errors all in the
neutron_fwaas.tests.unit.services.firewall.agents.varmour.test_varmour_router.TestVarmourRouter
section.
Running the py34 tests results in similar problems.  The failures follow
the following form:

Captured traceback:

~~~

   Traceback (most recent call last):

 File

[openstack-dev] [Group-based-policy] Service Chain work with LBaaS/FWaaS

2016-05-06 Thread 姚威
Hi all,


I know that GBP can work with neutron (ML2) via resource_mapping, and
groups/policies all work well.
Assuming that I have installed and enabled LBaaS and FWaaS, can I use the GBP
service chain via `chain_mapping` or other plugins?


Another question: if I use GBP with Cisco APIC as the native driver, what is
the GBP service chain workflow? For example, creating a service spec/node and
applying it to a rule.


I have searched the Internet, but found few references or discussions.




Thanks


Yao Wei
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Kolla rpm distribution

2016-05-06 Thread hu . zhijiang
Hi,

One of our applications would like to use Kolla as an upstream deployment
tool. As the application may run in environments without internet
connections, we are trying to package Kolla as well as its requirements,
such as Jinja2, into RPM packages and deliver them along with the
application. We would like to get some advice about:
1) Is building RPMs for upstream Python packages the right way to go for
our application?
2) Is there any plan for the Kolla project to implement RPM packaging? As we
are working on that, I think we can contribute.


Thank you,
Zhijiang

ZTE Information Security Notice: The information contained in this mail (and 
any attachment transmitted herewith) is privileged and confidential and is 
intended for the exclusive use of the addressee(s).  If you are not an intended 
recipient, any disclosure, reproduction, distribution or other dissemination or 
use of the information contained is strictly prohibited.  If you have received 
this mail in error, please delete it and notify us immediately.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up, Doc? 6 May 2016

2016-05-06 Thread Lana Brindley
Hi everyone,

I hope you all had a safe journey home from Summit, and are now fully recovered 
from all the excitement (and jetlag)! I'm really pleased with the amount of 
progress we made this time around. We have a definitive set of goals for 
Newton, and I'm confident that they're all moving us towards a much better docs 
suite overall. Of course, the biggest and most important work we have to do is 
to get our Install Guide changes underway. I'm very excited to see the new 
method for documenting OpenStack installation, and can't wait to see all our 
big tent projects contributing to docs in such a meaningful way. Thank you to 
everyone (in the room and online) who contributed to the Install Guide 
discussion, and helped us move forward on this important project.

In other news, I've written a wrapup of the Austin design summit on my blog, 
which you might be interested in: 
http://lanabrindley.com/2016/05/05/openstack-newton-summit-docs-wrapup/

== Progress towards Newton ==

152 days to go!

Bugs closed so far: 61

Because we have such a specific set of deliverables carved out for Newton, I've 
made them their own wiki page: 
https://wiki.openstack.org/wiki/Documentation/NewtonDeliverables
Feel free to add more detail and cross things off as they are achieved 
throughout the release. I will also do my best to ensure it's kept up to date 
for each newsletter.

One of the first tasks we've started work on after Summit is moving the Ops and 
HA Guides out of their own repositories and into openstack-manuals. As a 
result, those repositories are now frozen, and any work you want to do on those 
books should be in openstack-manuals. 

We are almost ready to publish the new RST version of the Ops Guide; there are
just a few cleanup edits going in now, so make sure you have the right book, in
the right repo, from now on. This was our very last book remaining in DocBook
XML, so the docs toolchain will be removing DocBook XML support. See spec 
https://review.openstack.org/311698 for details.

Another migration note is that the API reference content is moving from 
api-site to project specific repositories and api-site is now frozen. For more 
detail, see Anne's email: 
http://lists.openstack.org/pipermail/openstack-docs/2016-May/008536.html

== Mitaka wrapup ==

We performed a Mitaka retrospective at Summit, notes are here: 
https://etherpad.openstack.org/p/austin-docs-mitakaretro

In particular, I'd like to call out our hard working tools team Andreas and 
Christian, all our Speciality Team leads, and the Mitaka release managers Brian 
and Olga. Well done on a very successful release, everyone :)

Total bugs closed: 645

== Site Stats ==

Thanks to the lovely people at the Foundation (thanks, Allison!) I now have
access to more stats than I know what to do with, and I'm hoping to be
able to share some of these with you through the newsletter. If there's
something in particular you would like to see, then please let me know and I'll 
endeavour to record it here!

So far I can tell you that docs.openstack.org had 1.63M unique pageviews in
April, down slightly from 1.72M in March; the average session duration is
just over six minutes, at just under 4 pages per session.

== Doc team meeting ==

We'll be restarting the meeting series next week.

Next meetings:
US: Wednesday 11 May, 19:00 UTC
APAC: Wednesday 18 May, 00:30 UTC

Please go ahead and add any agenda items to the meeting page here: 
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

--

Keep on doc'ing!

Lana

https://wiki.openstack.org/wiki/Documentation/WhatsUpDoc#6_May_2016

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev