Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision minion nodes

2016-04-20 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
Hi KaiQiang,

Thank you for your reply.

As for 1), you are correct that Magnum does support 2 flavors (one for the 
master node and the other for minion nodes). What I want to address is 
whether we should support 2 or N Nova flavors for minion nodes only.

As for 2), we have made the Magnum templates work with Ironic (only for 
Fedora/Atomic/Kubernetes) to create a Magnum bay of Kubernetes, using the 
flat network for now (as, for now, Ironic doesn't support VLAN networks) in our 
prototype environment. Currently we just use a Heat template (Resource Group) -> 
Nova::Server -> Ironic driver as the Nova hypervisor to implement it.

Regards,
Gary

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: Wednesday, April 20, 2016 4:37 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to 
provision minion nodes


Hi Duan Li,

Not sure if I get your point very clearly.

1> Magnum does support:
https://github.com/openstack/magnum/blob/master/magnum/api/controllers/v1/baymodel.py#L65

flavor-id for minion nodes
master-flavor-id for the master node

So your K8s cluster can already have these two kinds of flavors.
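
For illustration, here is a rough sketch (not a verified client call; the endpoint,
token and the image/keypair/network values are placeholders) of creating a baymodel
that uses different Nova flavors for master and minion nodes, going straight at the
/v1/baymodels API referenced above:

import requests

payload = {
    'name': 'k8s-two-flavors',
    'coe': 'kubernetes',
    'image_id': 'fedora-atomic-latest',      # placeholder
    'keypair_id': 'default',                 # placeholder
    'external_network_id': 'public',         # placeholder
    'master_flavor_id': 'm1.medium',         # flavor for the master node
    'flavor_id': 'm1.small',                 # flavor for minion nodes
}
resp = requests.post('http://magnum-api:9511/v1/baymodels',
                     headers={'X-Auth-Token': 'ADMIN_TOKEN'},
                     json=payload)
resp.raise_for_status()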


2> On the Ironic case (I see you deploy on Ironic): I did not think the 
Magnum templates support Ironic yet, as the Ironic VLAN-related features are 
still being developed and are not merged (many patches are under review; one 
example: https://review.openstack.org/#/c/277853)


I am not sure how you would use Ironic for a K8s cluster.

Also, at this summit 
https://etherpad.openstack.org/p/magnum-newton-design-summit-topics, we will 
have a session about the Ironic case:
here it is: Ironic Integration: Add support for Ironic virt-driver

If you have a way to make Ironic work with Magnum, we welcome your contribution 
to that topic.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!

[Inactive hide details for "Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)" 
---20/04/2016 03:46:18 pm---Hi Folks, We are considerin]"Duan, Li-Gong (Gary, 
HPServers-Core-OE-PSC)" ---20/04/2016 03:46:18 pm---Hi Folks, We are 
considering whether Magnum can supports 2 Nova flavors to provision Kubernetes 
and

From: "Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)" 
>
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: 20/04/2016 03:46 pm
Subject: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision 
minion nodes





Hi Folks,

We are considering whether Magnum can support 2 Nova flavors to provision 
Kubernetes and other COE minion nodes.
This requirement comes from the following use case:
- There are 2 kinds of bare-metal machines at the customer site: legacy 
machines which do not support UEFI secure boot, and newer machines which do. 
The user wants to use Magnum to provision a Magnum bay of Kubernetes from 
these 2 kinds of bare-metal machines and, for the machines that support secure 
boot, wants them booted up with UEFI secure boot. Two Kubernetes labels 
(secure-booted and non-secure-booted) are then created, and the user can deploy 
their data-sensitive/critical workloads/containers/pods on the bare-metal 
machines which are secure-booted.

This requires Magnum to support 2 Nova flavors (one with "extra_spec: 
secure_boot=True" and the other without it), based on the Ironic feature 
(https://specs.openstack.org/openstack/ironic-specs/specs/kilo-implemented/uefi-secure-boot.html).
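
For illustration, a minimal sketch (assuming python-novaclient; the credentials are
placeholders, and the exact extra-spec key used by the Ironic UEFI secure boot
feature should be taken from the linked spec - 'capabilities:secure_boot' is shown
here only as an assumption) of the two flavors being discussed:

from novaclient import client as nova_client

nova = nova_client.Client('2', 'admin', 'secret', 'admin',
                          'http://keystone:5000/v2.0')

# Flavor for the machines that support UEFI secure boot.
secure = nova.flavors.create('bm.secure-boot', ram=65536, vcpus=16, disk=100)
secure.set_keys({'capabilities:secure_boot': 'true'})

# Flavor for the legacy machines, with no secure-boot extra spec.
legacy = nova.flavors.create('bm.legacy', ram=65536, vcpus=16, disk=100)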

Could you kindly give me some comments on this requirement, or on whether it is 
reasonable from your point of view? If you agree, we can write a design spec and 
implement this feature.

Regards,
Gary


Re: [openstack-dev] [infra][devstack] stable/kilo devstack fails with pkg_resources.ContextualVersionConflict

2016-04-20 Thread Tony Breeds
On Tue, Apr 19, 2016 at 01:14:34PM +0200, Lajos Katona wrote:
> Hi,
> 
> In our internal CI system we realized that stable/kilo devstack fails with
> the following stack trace:



> It seems that the root cause is that testresources has a new version 2.0.0
> from 18 April.
> 
> I tried to find similar errors on openstack infra, but for me
> http://logstash.openstack.org/ gives back no result.
> 
> I have a patch in requirements (https://review.openstack.org/307174) but I
> got the same error for those runs.

Thanks for your help!
 
> Could somebody help to find a solution for this?

This should be resolved now, with the release of oslo.db 1.7.5.
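
For anyone hitting this locally, a small sketch of how the conflict surfaces (a
generic illustration, not a reconstruction of the exact devstack traceback; it
assumes testresources 2.0.0 is what pip resolved from PyPI):

import pkg_resources

try:
    # oslo.db 1.7.5 caps this at <2.0.0; an environment that already pulled
    # in testresources 2.0.0 trips a VersionConflict here.
    pkg_resources.require('testresources<2.0.0')
except pkg_resources.VersionConflict as exc:
    print('pkg_resources rejected the installed version: %s' % exc)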

Yours Tony.




Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-20 Thread Clark Boylan
On Wed, Apr 20, 2016, at 08:44 PM, Tony Breeds wrote:
> On Thu, Apr 21, 2016 at 02:09:24PM +1200, Robert Collins wrote:
> 
> > I also argued at the time that we should aim for entirely automated
> > check-and-update. This has stalled on not figuring out how to run e.g.
> > Neutron unit tests against requirements changes - our coverage is just
> > too low at the moment to proceed further down the automation path.
> 
> I thought we knew how to do this, it just hadn't been done.  I *think*
> mostly because it's a largish project-config change.

It isn't too bad. I went ahead and pushed
https://review.openstack.org/308739 up, which *should* do it (but Andreas
will likely point out something I overlooked). It is made easier by the
fact that we already mostly have an integration test between requirements
and unit tests for every project using the python unittest template. I
just had to make a small adjustment to how the repos are configured.

> Aiming for entirely automated is great, but getting to the point that we
> run (say) the nova, neutron, keystone, swift and horizon unit tests on *all*
> changes to upper-constraints would be fantastic and something I'm keen to
> work on during Newton (as I suspect others are).
> 
> On a tangent, we also need to get wider adoption of constraints. I admit I
> wasn't paying close attention, but I thought this was basically the
> default.  It seems I was wrong :(
> 
> Yours Tony.



Re: [openstack-dev] [Neutron] L3 HA testing on scale

2016-04-20 Thread Edgar Magana
Indeed it will be a terrific contribution.

Edgar


On Apr 20, 2016, at 4:10 AM, Dina Belova wrote:

Folks,

I think Ann's report is super cool and 100% worth publishing on OpenStack 
performance-docs. This 
is really good information to share community-wide.

Ann, please think about whether you would like to contribute to the performance documentation.

Cheers,
Dina

On Wed, Apr 20, 2016 at 12:34 PM, Anna Kamyshnikova wrote:
Unfortunately, I won't attend the summit in Austin, which is why I decided to 
present these results on the mailing list instead.

On Tue, Apr 19, 2016 at 7:29 PM, Edgar Magana wrote:
Is there any session presenting these results during the Summit? It will be 
awesome to have a session on this. I could extend the invite to the Ops 
Meet-up. We have a section on lightning talks where the team will be very 
interested in learning from your testing.

Edgar

From: Anna Kamyshnikova
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, April 19, 2016 at 5:30 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Neutron] L3 HA testing on scale

>I would definitely like to see how these results are affected by
>https://review.openstack.org/#/c/305774/ but understandably 49
>physical nodes are hard to come by.

Yes, I'm planning to check how the situation will change with all recent fixes, but 
I will be able to do this in May or later.

>About testing on scale it's not so problematic because of the Cloud For All 
>project.
>Here [1] you can request for a multi node cluster which you can use to
>perform tests. Exact requirements are specified on that website.

[1] http://osic.org

Thanks for pointing this out!

>It's a great report, thanks for sharing that! Do you plan to run similar
>scale tests on other scenarios e.g. dvr?

Thanks! I have testing L3 HA + DVR in my plans.

P. S.

I've updated the environment description in the report with some details.

On Tue, Apr 19, 2016 at 12:52 PM, Rossella Sblendido wrote:


On 04/18/2016 04:15 PM, Anna Kamyshnikova wrote:
> Hi guys!
>
> As a developer I use Devstack or a multinode OpenStack installation (4-5
> nodes) for work, but these are "abstract" environments, where you are
> not able to perform some scenarios as your machine is not powerful
> enough. But it is really important to understand the issues that real
> deployments have.
>
> Recently I've performed testing of L3 HA on a scale environment of 49
> nodes (3 controllers, 46 computes) running Fuel 8.0. On this environment I ran
> shaker and rally tests and also performed some
> manual destructive scenarios. I think that it is very important to
> share these results. Ideally, I think that we should collect statistics
> for different configurations each release to compare and check them to
> make sure that we are heading the right way.
>
> The results of the shaker and rally tests are at [1]. I put a detailed report in a
> Google doc [2]. I would appreciate all comments on these results.

It's a great report, thanks for sharing that! Do you plan to run similar
scale tests on other scenarios e.g. dvr?

Rossella

>
> [1] - http://akamyshnikova.github.io/neutron-benchmark-results/
> [2]
> - 
> https://docs.google.com/a/mirantis.com/document/d/1TFEUzRRlRIt2HpsOzFh-RqWwgTzJPBefePPA0f0x9uw/edit?usp=sharing
>
> Regards,
> Ann Kamyshnikova
> Mirantis, Inc
>
>




--
Regards,
Ann Kamyshnikova
Mirantis, Inc





--
Regards,
Ann 

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-20 Thread Tony Breeds
On Thu, Apr 21, 2016 at 02:09:24PM +1200, Robert Collins wrote:

> I also argued at the time that we should aim for entirely automated
> check-and-update. This has stalled on not figuring out how to run e.g.
> Neutron unit tests against requirements changes - our coverage is just
> too low at the moment to proceed further down the automation path.

I thought we knew how to do this, it just hadn't been done.  I *think* mostly
because it's a largish project-config change.

Aiming for entirely automated is great, but getting to the point that we run
(say) the nova, neutron, keystone, swift and horizon unit tests on *all*
changes to upper-constraints would be fantastic and something I'm keen to work
on during Newton (as I suspect others are).

On a tangent, we also need to get wider adoption of constraints. I admit I
wasn't paying close attention, but I thought this was basically the default.  It
seems I was wrong :(

Yours Tony.




Re: [openstack-dev] summit tools

2016-04-20 Thread David Stanek
On Wed, Apr 20, 2016 at 12:14 PM Neil Jerram 
wrote:

> A couple of questions about our Austin-related planning tools...
>
> - Can one's calendar at
>
> https://www.openstack.org/summit/austin-2016/summit-schedule/#day=2016-04-25
> be exported as .ics, or otherwise integrated into a wider calendaring
> system?
>

I put together a tool[1] that can do this. There is some manual work
involved because you have to save the print view of your calendar from your
browser.

It was a quick and dirty hack, so patches are accepted and encouraged.

1. https://gist.github.com/dstanek/83d536af9461eb26d7a19ff73a7391c2
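
For reference, a minimal sketch (assuming the icalendar package; David's gist may do
it differently, and the event details below are placeholders) of turning one schedule
entry into an .ics event that a wider calendaring system can import:

from datetime import datetime
from icalendar import Calendar, Event

cal = Calendar()
event = Event()
event.add('summary', 'Example summit session')         # placeholder title
event.add('dtstart', datetime(2016, 4, 25, 11, 0))     # placeholder times
event.add('dtend', datetime(2016, 4, 25, 11, 40))
event.add('location', 'Austin Convention Center')
cal.add_component(event)

with open('summit.ics', 'wb') as f:
    f.write(cal.to_ical())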

-- David


Re: [openstack-dev] [glance] weekly meeting on apr 21st

2016-04-20 Thread Nikhil Komawar
Just wanted to let everyone know:

I am officially cancelling the meeting for Apr 21st (no agenda).

Also, we won't have a weekly meeting during the summit week, i.e. on Apr 28th.

Next meeting will be on May 5th. See you all online then.

On 4/20/16 1:27 PM, Nikhil Komawar wrote:
> Hi all,
>
> Last week when I asked if we needed a meeting for this week, the poll
> [1] resulted in "maybe". I currently do not see any 'specific' agenda
> [2] items posted for this week's meeting. I am assuming everyone is busy
> going into the summit and the updates can be shared then or the meeting
> after.
>
> If anyone wants to have the meeting, please add your items now. If none
> exist by 23:59 UTC today, I will cancel the meeting.
>
> [1]
> http://eavesdrop.openstack.org/meetings/glance/2016/glance.2016-04-14-14.00.log.html#l-82
> [2] https://etherpad.openstack.org/p/glance-team-meeting-agenda
>

-- 

Thanks,
Nikhil




Re: [openstack-dev] [neutron] [networking-sfc] A standards-compliant SFC API

2016-04-20 Thread Vikram Choudhary
From a quick glance over the proposal, it seems like networking-sfc already
does the same. In addition, networking-sfc is successfully integrated with
ONOS [1] and planned for ODL [2], OVN [3] & Tacker [4] (without any issues
with the existing APIs so far). If we feel the existing networking-sfc APIs
have issues, then let's enhance them rather than start a fresh effort from
scratch.

Let's discuss the proposal further on the submitted spec.

[1]
https://github.com/openstack/networking-onos/blob/master/doc/source/devref/sfc_driver.rst
[2] https://review.openstack.org/#/c/300898/
[3]
https://blueprints.launchpad.net/networking-sfc/+spec/networking-sfc-ovn-driver
[4]
https://blueprints.launchpad.net/networking-sfc/+spec/tacker-networking-sfc-driver


On Thu, Apr 21, 2016 at 1:24 AM, Duarte Cardoso, Igor <
igor.duarte.card...@intel.com> wrote:

> Thanks for the feedback Armando,
>
>
>
> Adding missing tag.
>
>
>
> Best regards,
>
> Igor.
>
>
>
> *From:* Armando M. [mailto:arma...@gmail.com]
> *Sent:* Wednesday, April 20, 2016 6:03 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [neutron][sfc] A standards-compliant SFC
> API
>
>
>
>
>
> On 20 April 2016 at 09:31, Duarte Cardoso, Igor <
> igor.duarte.card...@intel.com> wrote:
>
> Dear OpenStack Community,
>
>
>
> We've been investigating options in/around OpenStack for supporting
> Service Function Chaining. The networking-sfc project has made significant
> progress in this space, and we see lots of value in what has been
> completed. However, when we looked at the related IETF specs on SFC we
> concluded that there would be value in further developing an SFC API and
> related classification functionality to enhance the alignment between the
> work in the OpenStack community with the standards work. We would like to
> propose the SFC part as a potential networking-sfc v2 API, but are open to
> other options too based on your feedback.
>
>
>
> I have submitted a spec to the neutron-specs repo [1], where you can check
> what our initial thoughts for this new API are, and provide your feedback
> or questions regarding the same.
>
>
>
> Your thoughts on this are deeply appreciated. We are looking forward to
> having further discussions with everyone interested in giving feedback or
> establishing collaborations during the OpenStack Summit in Austin.
>
>
>
> [1] https://review.openstack.org/#/c/308453
>
>
>
> Thanks for reaching out.
>
>
>
> The networking-sfc initiative so far has been pretty autonomous. The
> project has its own launchpad project [1] and its own docs to document APIs
> and proposals [2]. During the long journey that Neutron has been through,
> we have been adjusting how to manage the project in order to strike a good
> balance between development agility, product stability and community needs.
> We're always looking forward to improving that balance and this means that
> how we track certain initiatives may evolve in the future. For now, it's
> probably best to target the mailing list with tag [networking-sfc] (in
> addition to neutron), as well as the project noted below.
>
>
>
> [1] https://launchpad.net/networking-sfc
>
> [2] http://docs.openstack.org/developer/networking-sfc/
>
>
>
>
>
> Thank you,
>
> Igor & the Intel OpenStack networking team.
>
>


Re: [openstack-dev] [keystone] Liberty - problem with assignment LDAP backend - Groups

2016-04-20 Thread Adam Young

On 04/20/2016 09:10 PM, Dmitry Sutyagin wrote:
Another correction - the issue is observed in Kilo, not Liberty, sorry 
for messing this up. (though this part of the code is identical in L)


On Wed, Apr 20, 2016 at 5:50 PM, Dmitry Sutyagin wrote:


Correction:

group_dns = [u'CN=GroupX,OU=Groups,OU=SomeOU,DC=zzz']
ra.user_dn.upper() = 'CN=GROUPX,OU=GROUPS,OU=SOMEOU,DC=ZZZ'

So this could work if only:
- string in group_dns was str, not unicode
- text was uppercase

Now the question is - should it be so?

On Wed, Apr 20, 2016 at 5:41 PM, Dmitry Sutyagin
> wrote:

Hi everybody,

I am observing the following issue:

LDAP backend is enabled for identity and assignment, domain
specific configs disabled.
LDAP section configured - users, groups, projects and roles
are mapped.
I am able to use identity v3 api to list users, groups, to
verify that a user is in a group, and also to view role
assignments - everythings looks correct so far.
I am able to create a role for user in LDAP and if I put a
user directly into a role, everything works.
But when I put a group (which contains that user) into a role
- the user get's 401.

I have found a spot in the code which causes the issue:


https://github.com/openstack/keystone/blob/stable/liberty/keystone/assignment/backends/ldap.py#L67

This check returns False, here is why:
===
group_dns = ['cn=GroupX,ou=Groups,ou=YYY,dc=...']
role_assignment.user_dn = 'cn=UserX,ou=Users,ou=YYY,dc=...'
===

Therefore the check:

if role_assignment.user_dn.upper() in group_dns

Will return false. I do not understand how this should work -
why should user_dn match group_dn?



I would not advise using the LDAP assignment backend; rather, use 
LDAP for identity and put assignments in SQL.  The LDAP assignment backend was 
deprecated a few releases ago and has since been removed.





-- 
Yours sincerely,

Dmitry Sutyagin
OpenStack Escalations Engineer
Mirantis, Inc.




-- 
Yours sincerely,

Dmitry Sutyagin
OpenStack Escalations Engineer
Mirantis, Inc.




--
Yours sincerely,
Dmitry Sutyagin
OpenStack Escalations Engineer
Mirantis, Inc.




Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-20 Thread Robert Collins
On 18 April 2016 at 03:13, Doug Hellmann  wrote:
> I am organizing a summit session for the cross-project track to
> (re)consider how we manage our list of global dependencies [1].
> Some of the changes I propose would have a big impact, and so I
> want to ensure everyone doing packaging work for distros is available
> for the discussion. Please review the etherpad [2] and pass the
> information along to colleagues who might be interested.


Thanks for kicking this off Doug. It's a great topic - as the thread shows :).

I have a few thoughts - and I fully intend to be at the session as
well. I don't know if I'm pro or con the specific proposal today - and
I definitely need to understand the details of the issue a bit better,
my focus has been on various testing and packaging things for a bit -
I've neglected my requirements reviews except when prompted - sorry.

I think that federated constraints/requirements raise some risks with
multi-project gating jobs. This is separate to the co-installability
requirement and instead due to the ability to end up with a multi-tree
wedge. If something happens atomically that breaks two projects
constraints at the same time, two distinct git changes are required to
fix that. AIUI this happens maybe once every 8 weeks? In a centralised model we
can fix that atomically within the normal CI workflow. With a
federated approach, we will have to get infra intervention. Similarly,
if there is a needle-threading situation that can end up with multiple
projects broken at the same time, and they consume each other (or both
are present in devstack jobs for the other) we can wedge. I'm thinking
e.g. changes to Nova and Neutron go through, independently but the
combination turns out to be API incompatible on the callbacks between
services or some such. Perhaps too niche to worry about?

Co-installability has very significant impact on the backwards compat
discussion: its a major driver of the need I perceive for library
backwards compatibility (outside of client library compat with older
clouds) and I for one think we could make a bunch of stuff simpler
with a reduced co-installability story.
https://review.openstack.org/#/c/226157/ and
https://etherpad.openstack.org/p/newton-backwards-compat-libs

I'm super worried about the impact on legacy distributions though - I
don't think they're ready for it, and I don't think we're important
enough to act as a sane forcing function: but perhaps we can find some
compromise that works for everyone - or at least get distros to commit
to moving ahead in their view of the world :).

I don't think we can ditch co-installability per se though, even in a
totally containerised world: we still have the need to make each leaf
artifact in the dependency tree co-installable with all its
dependencies. That is, we can't get to a situation where
oslo.messaging and oslo.db are not co-installable, even though they
don't depend on each other, because Nova depends on both and we want
to be able to actually install Nova.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-20 Thread Robert Collins
On 20 April 2016 at 05:44, Clint Byrum  wrote:
> Excerpts from Michał Jastrzębski's message of 2016-04-18 10:29:20 -0700:
>> What I meant is if you have liberty Nova and liberty Cinder, and you
>> want to upgrade Nova to Mitaka, you also upgrade Oslo to Mitaka and
>> Cinder which was liberty either needs to be upgraded or is broken,
>> therefore during upgrade you need to do cinder and nova at the same
>> time. DB can be snapshotted for rollbacks.
>>
>
> If we're breaking backward compatibility even across one release, that
> is a bug.  You should be able to run Liberty components with Mitaka
> Libraries. Unfortunately, the testing matrix for all of the combinations
> is huge and nobody is suggesting we try to solve that equation.

Sadly no: we don't make that guarantee today. I think we should, but
there isn't consensus - at least amongst the folk that have been
debating the backwards compat for libraries spec - that it is actually
*desirable*. Please, come to the session and help build consensus in
Austin :).

> However, to the point of distros: partial upgrades is not the model distro
> packages work under. They upgrade what they can, whether they're a rolling
> release, or 7 year cycle LTS's. When the operator says "give me the new
> release", the packages that can be upgraded, will be upgraded. And if
> Mitaka Nova is depending on something outside the upper constraints in
> another package on the system, the distro will just hold Nova back.

And presumably all of OpenStack.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



[openstack-dev] [QA] Meeting Thursday April 21st at 9:00 UTC

2016-04-20 Thread GHANSHYAM MANN
Hello everyone,

This is a reminder that the weekly OpenStack QA team IRC meeting will be on
Thursday, April 21st at 9:00 UTC in the #openstack-meeting channel.

The agenda for the meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_April_21st_2016_.280900_UTC.29

Anyone is welcome to add an item to the agenda.

To help people figure out what time 9:00 UTC is in other timezones, the
next meeting will be at:

04:00 EST
18:00 JST
18:30 ACST
11:00 CEST
04:00 CDT
02:00 PDT
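
The conversions above can be reproduced with a short script; a sketch assuming the
pytz package (zone names are picked only to roughly match the listed regions):

from datetime import datetime
import pytz

meeting = pytz.utc.localize(datetime(2016, 4, 21, 9, 0))
for zone in ('US/Eastern', 'Asia/Tokyo', 'Australia/Adelaide',
             'Europe/Berlin', 'US/Central', 'US/Pacific'):
    local = meeting.astimezone(pytz.timezone(zone))
    print(zone, local.strftime('%H:%M %Z'))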

Regards
Ghanshyam Mann



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-20 Thread Robert Collins
On 20 April 2016 at 04:47, Clark Boylan  wrote:
> On Tue, Apr 19, 2016, at 08:14 AM, Doug Hellmann wrote:
>> Excerpts from Jeremy Stanley's message of 2016-04-19 15:00:24 +:
>> > On 2016-04-19 09:10:11 -0400 (-0400), Doug Hellmann wrote:
>> > [...]
>> > > We have the global list and the upper constraints list, and both
>> > > are intended to be used to signal to packaging folks what we think
>> > > ought to be used. I'm glad that signaling is working, and maybe
>> > > that means you're right that we don't need to sync the list
>> > > absolutely, just as a set of "compatible" ranges.
>> > [...]
>> >
>> > When we were firming up the constraints idea in Vancouver, if my
>> > memory is correct (which it quite often is not these days), part of
>> > the long tail Robert suggested was that once constraints usage in
>> > the CI is widespread we could consider resolving it from individual
>> > requirements lists in participating projects, drop the version
>> > specifiers from the global requirements list entirely and stop
>> > trying to actively synchronize requirement version ranges in
>> > individual projects. I don't recall any objection from those of us
>> > around the table, though it was a small ad-hoc group and we
>> > certainly didn't dig too deeply into the potential caveats that
>> > might imply.
>>
>> I have no memory of that part of the conversation, but I'll take your
>> word for it.
>>
>> If I understand your description correctly, that may be another
>> alternative. Most of the reviews I've been doing are related to the
>> constraints, though, so I'm not really sure it lowers the amount of work
>> I'm seeing.
>
> This was one of my concerns with constraints when we put them in place.
> Previously we would open requirements and things would break
> periodically and we would address them. With constraints every single
> requirements update whether centralized or decentralized needs to be
> reviewed. It does add quite a bit of overhead.
>
> The argument at the time was that the time saved by not having the gate
> explode every few weeks would offset the cost of micromanaging every
> constraint update.

I also argued at the time that we should aim for entirely automated
check-and-update. This has stalled on not figuring out how to run e.g.
Neutron unit tests against requirements changes - our coverage is just
too low at the moment to proceed further down the automation path.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-20 Thread Robert Collins
On 20 April 2016 at 03:00, Jeremy Stanley  wrote:
> On 2016-04-19 09:10:11 -0400 (-0400), Doug Hellmann wrote:
> [...]
>> We have the global list and the upper constraints list, and both
>> are intended to be used to signal to packaging folks what we think
>> ought to be used. I'm glad that signaling is working, and maybe
>> that means you're right that we don't need to sync the list
>> absolutely, just as a set of "compatible" ranges.
> [...]
>
> When we were firming up the constraints idea in Vancouver, if my
> memory is correct (which it quite often is not these days), part of
> the long tail Robert suggested was that once constraints usage in
> the CI is widespread we could consider resolving it from individual
> requirements lists in participating projects, drop the version
> specifiers from the global requirements list entirely and stop
> trying to actively synchronize requirement version ranges in
> individual projects. I don't recall any objection from those of us
> around the table, though it was a small ad-hoc group and we
> certainly didn't dig too deeply into the potential caveats that
> might imply.

I think I suggested that we could remove the *versions* from
global-requirements. Constraints being in a single place is a
necessary tool unless (we have atomic-multi-branch commits via zuul ||
we never depend on two projects agreeing on compatible versions of
libraries in the CI jobs that run for any given project).

Constraints being in a single place (not necessarily a single file)
allows us to fix multi-project wedging issues with a single git commit.
Atomic multi-branch commits in zuul would allow us to fix
multi-project wedging issues if constraints are federated out to
multiple trees.
Never needing any two projects to agree on compatible versions in CI
would allow us to change things without triggering a wedge...
possibly. *detailed* thought needed here - because consider for
instance the impact of a removed release on PyPI.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [keystone] Liberty - problem with assignment LDAP backend - Groups

2016-04-20 Thread Dmitry Sutyagin
Another correction - the issue is observed in Kilo, not Liberty, sorry for
messing this up. (though this part of the code is identical in L)

On Wed, Apr 20, 2016 at 5:50 PM, Dmitry Sutyagin 
wrote:

> Correction:
>
> group_dns = [u'CN=GroupX,OU=Groups,OU=SomeOU,DC=zzz']
> ra.user_dn.upper() = 'CN=GROUPX,OU=GROUPS,OU=SOMEOU,DC=ZZZ'
>
> So this could work if only:
> - string in group_dns was str, not unicode
> - text was uppercase
>
> Now the question is - should it be so?
>
> On Wed, Apr 20, 2016 at 5:41 PM, Dmitry Sutyagin 
> wrote:
>
>> Hi everybody,
>>
>> I am observing the following issue:
>>
>> LDAP backend is enabled for identity and assignment, domain specific
>> configs disabled.
>> LDAP section configured - users, groups, projects and roles are mapped.
>> I am able to use identity v3 api to list users, groups, to verify that a
>> user is in a group, and also to view role assignments - everythings looks
>> correct so far.
>> I am able to create a role for user in LDAP and if I put a user directly
>> into a role, everything works.
>> But when I put a group (which contains that user) into a role - the user
>> get's 401.
>>
>> I have found a spot in the code which causes the issue:
>>
>>
>> https://github.com/openstack/keystone/blob/stable/liberty/keystone/assignment/backends/ldap.py#L67
>>
>> This check returns False, here is why:
>> ===
>> group_dns = ['cn=GroupX,ou=Groups,ou=YYY,dc=...']
>> role_assignment.user_dn = 'cn=UserX,ou=Users,ou=YYY,dc=...'
>> ===
>>
>> Therefore the check:
>> 
>> if role_assignment.user_dn.upper() in group_dns
>> 
>> Will return false. I do not understand how this should work - why should
>> user_dn match group_dn?
>>
>> --
>> Yours sincerely,
>> Dmitry Sutyagin
>> OpenStack Escalations Engineer
>> Mirantis, Inc.
>>
>
>
>
> --
> Yours sincerely,
> Dmitry Sutyagin
> OpenStack Escalations Engineer
> Mirantis, Inc.
>



-- 
Yours sincerely,
Dmitry Sutyagin
OpenStack Escalations Engineer
Mirantis, Inc.


Re: [openstack-dev] [keystone] Liberty - problem with assignment LDAP backend - Groups

2016-04-20 Thread Dmitry Sutyagin
Correction:

group_dns = [u'CN=GroupX,OU=Groups,OU=SomeOU,DC=zzz']
ra.user_dn.upper() = 'CN=GROUPX,OU=GROUPS,OU=SOMEOU,DC=ZZZ'

So this would only work if:
- the string in group_dns were a str, not unicode
- the text were uppercase

Now the question is - should it be so?
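
A small sketch of the comparison in question (illustration only, not the keystone
fix): with the raw values above the membership test fails purely on case, and
normalizing both sides makes it pass - whether keystone should do that is exactly
the question being asked.

group_dns = [u'CN=GroupX,OU=Groups,OU=SomeOU,DC=zzz']
user_dn_upper = 'CN=GROUPX,OU=GROUPS,OU=SOMEOU,DC=ZZZ'

# Fails: the list entries are not upper-cased, so the exact-match test misses.
print(user_dn_upper in group_dns)
# Passes once both sides are normalized the same way.
print(user_dn_upper in [dn.upper() for dn in group_dns])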

On Wed, Apr 20, 2016 at 5:41 PM, Dmitry Sutyagin 
wrote:

> Hi everybody,
>
> I am observing the following issue:
>
> LDAP backend is enabled for identity and assignment, domain specific
> configs disabled.
> LDAP section configured - users, groups, projects and roles are mapped.
> I am able to use identity v3 api to list users, groups, to verify that a
> user is in a group, and also to view role assignments - everythings looks
> correct so far.
> I am able to create a role for user in LDAP and if I put a user directly
> into a role, everything works.
> But when I put a group (which contains that user) into a role - the user
> get's 401.
>
> I have found a spot in the code which causes the issue:
>
>
> https://github.com/openstack/keystone/blob/stable/liberty/keystone/assignment/backends/ldap.py#L67
>
> This check returns False, here is why:
> ===
> group_dns = ['cn=GroupX,ou=Groups,ou=YYY,dc=...']
> role_assignment.user_dn = 'cn=UserX,ou=Users,ou=YYY,dc=...'
> ===
>
> Therefore the check:
> 
> if role_assignment.user_dn.upper() in group_dns
> 
> Will return false. I do not understand how this should work - why should
> user_dn match group_dn?
>
> --
> Yours sincerely,
> Dmitry Sutyagin
> OpenStack Escalations Engineer
> Mirantis, Inc.
>



-- 
Yours sincerely,
Dmitry Sutyagin
OpenStack Escalations Engineer
Mirantis, Inc.


[openstack-dev] [keystone] Liberty - problem with assignment LDAP backend - Groups

2016-04-20 Thread Dmitry Sutyagin
Hi everybody,

I am observing the following issue:

LDAP backend is enabled for identity and assignment, domain specific
configs disabled.
LDAP section configured - users, groups, projects and roles are mapped.
I am able to use the identity v3 API to list users and groups, to verify that a
user is in a group, and also to view role assignments - everything looks
correct so far.
I am able to create a role for a user in LDAP, and if I put a user directly
into a role, everything works.
But when I put a group (which contains that user) into a role, the user
gets a 401.

I have found a spot in the code which causes the issue:

https://github.com/openstack/keystone/blob/stable/liberty/keystone/assignment/backends/ldap.py#L67

This check returns False, here is why:
===
group_dns = ['cn=GroupX,ou=Groups,ou=YYY,dc=...']
role_assignment.user_dn = 'cn=UserX,ou=Users,ou=YYY,dc=...'
===

Therefore the check:

if role_assignment.user_dn.upper() in group_dns

Will return false. I do not understand how this should work - why should
user_dn match group_dn?

-- 
Yours sincerely,
Dmitry Sutyagin
OpenStack Escalations Engineer
Mirantis, Inc.


Re: [openstack-dev] [tricircle] http response code

2016-04-20 Thread joehuang
Yes, Tricircle follows the HTTP guidelines from the OpenStack API working group:

https://specs.openstack.org/openstack/api-wg/guidelines/http.html

If something does not follow the guidelines, please report a bug to track the 
issue.
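
For illustration, a minimal sketch (not the actual Tricircle controller code; the
lookup helper is a stand-in) of following the guideline in a Pecan controller by
returning 404 when a pod is not found:

import pecan
from pecan import rest


def _lookup_pod(pod_id):
    # Stand-in for a real database lookup.
    return None


class PodsController(rest.RestController):

    @pecan.expose('json')
    def get_one(self, pod_id):
        pod = _lookup_pod(pod_id)
        if pod is None:
            pecan.abort(404, 'pod %s could not be found' % pod_id)
        return pod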

Best Regards
Chaoyi Huang ( Joe Huang )

From: Shinobu Kinjo [mailto:shinobu...@gmail.com]
Sent: Wednesday, April 20, 2016 6:46 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: lbq09008228
Subject: Re: [openstack-dev] [tricircle] http response code

This might be the answer to your question.

https://github.com/openstack/tricircle/blob/master/tricircle/api/controllers/pod.py
Cheers,
S

On Wed, Apr 20, 2016 at 6:37 PM, 李戈 wrote:
Hi,
I read the API source code recently and have a question. Do we have uniform 
HTTP response codes?

For example, 404 means "Not Found".









--
Email: shin...@linux.com
GitHub: shinobu-x
Blog: Life with Distributed Computational System based on OpenSource


[openstack-dev] [release][oslo] oslo.i18n 3.6.0 release (newton)

2016-04-20 Thread no-reply
We are tickled pink to announce the release of:

oslo.i18n 3.6.0: Oslo i18n library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.i18n

With package available at:

https://pypi.python.org/pypi/oslo.i18n

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.i18n

For more details, please see below.

Changes in oslo.i18n 3.4.0..3.6.0
-

fb8b773 Updated from global requirements
21e78da Updated from global requirements
1dc7cb7 Better isolate tests and fixtures from environment
9f0a3c3 Updated from global requirements

Diffstat (except docs and test files)
-

oslo_i18n/_message.py | 5 -
oslo_i18n/fixture.py  | 8 
requirements.txt  | 2 +-
test-requirements.txt | 2 +-
4 files changed, 10 insertions(+), 7 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index d91c357..be4eb38 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -6 +6 @@ pbr>=1.6 # Apache-2.0
-Babel>=1.3 # BSD
+Babel!=2.3.0,!=2.3.1,!=2.3.2,!=2.3.3,>=1.3 # BSD
diff --git a/test-requirements.txt b/test-requirements.txt
index 7da5b32..fa71f43 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -15 +15 @@ coverage>=3.6 # Apache-2.0
-oslo.config>=3.4.0 # Apache-2.0
+oslo.config>=3.9.0 # Apache-2.0





[openstack-dev] [release][oslo] oslo.log 3.5.0 release (newton)

2016-04-20 Thread no-reply
We are tickled pink to announce the release of:

oslo.log 3.5.0: oslo.log library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.log

With package available at:

https://pypi.python.org/pypi/oslo.log

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.log

For more details, please see below.

Changes in oslo.log 3.4.0..3.5.0


273ef53 Remove direct dependency on babel

Diffstat (except docs and test files)
-

requirements.txt | 1 -
1 file changed, 1 deletion(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index de704ae..59ec3f1 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -6 +5,0 @@ pbr>=1.6 # Apache-2.0
-Babel!=2.3.0,!=2.3.1,!=2.3.2,!=2.3.3,>=1.3 # BSD





[openstack-dev] [release][stable][oslo] oslo.db 1.7.5 release (kilo)

2016-04-20 Thread no-reply
We are psyched to announce the release of:

oslo.db 1.7.5: oslo.db library

This release is part of the kilo stable release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.db

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.db

For more details, please see below.

Changes in oslo.db 1.7.4..1.7.5
---

35a0e64 Manual g-r sync
5501224 Updated from global requirements

Diffstat (except docs and test files)
-

requirements.txt | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 2ccf36f..78e6a9b 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -18,2 +18,2 @@ six>=1.9.0
-testtools>=0.9.36,!=1.2.0,<2.0.0
-testresources>=0.2.4
+testtools!=1.2.0,<2.0.0,>=0.9.36
+testresources<2.0.0,>=0.2.4
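
For clarity, a quick sketch of what the tightened specifier above means in
practice, checked with setuptools' pkg_resources:

from pkg_resources import Requirement

req = Requirement.parse('testresources<2.0.0,>=0.2.4')
print('1.0.0' in req)   # True: still allowed
print('2.0.0' in req)   # False: the release that broke stable/kilo is excluded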





Re: [openstack-dev] [python-keystoneclient] Return request-id to caller

2016-04-20 Thread Morgan Fainberg
On Wed, Apr 13, 2016 at 6:07 AM, David Stanek  wrote:

> On Wed, Apr 13, 2016 at 3:26 AM koshiya maho 
> wrote:
>
>>
>> My request to all keystone cores to give their suggestions about the same.
>>
>>
> I'll test this a little and see if I can see how it breaks.
>
> Overall I'm not really a fan of this design. It's just a hack to add
> attributes where they don't belong. Long term I think this will be hard to
> maintain.
>
>
>
If we want to return a response object we should return a response object.
Returning a magic list with attributes (or a dict with attributes, etc)
feels very, very wrong.
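
For readers following along, a purely illustrative sketch of the pattern being
debated (the class and attribute names here are made up, not the proposed
keystoneclient code): a list subclass that carries the request ID of the API call
that produced it.

class ListWithMeta(list):
    def __init__(self, values, request_ids=None):
        super(ListWithMeta, self).__init__(values)
        self.request_ids = request_ids or []


users = ListWithMeta(['alice', 'bob'], request_ids=['req-abc123'])
print(users[0], users.request_ids)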

I'm not going to block this design, but I wish we had something a bit
better.

--Morgan


[openstack-dev] [nova] api-ref content verification phase doc push

2016-04-20 Thread Sean Dague
This morning we finally cleaned up the last warnings in api-ref, so now
we can enforce errors on warnings. Woot! That went much faster than I
anticipated, and puts us at a really good place for summit.

The next phase is the content verification phase. This patch merges
a set of comments at the top of every file noting that it needs 4 types of
verification - https://review.openstack.org/#/c/308569/ - as described
here: https://wiki.openstack.org/wiki/NovaAPIRef

The expectation is that every file is going to see 4 (or more in
complicated cases) patches. I tried to break these up into pretty
concrete things to verify, so that they will be easy for content writers
and reviewers. They should be done in the 1, 2, 3, 4 order for each
file, but there is no need to take a whole file all the way to the end. You can do a
bunch of 1s on files, then some 2s, etc.

The idea is that in the patch in which you feel one of the verification
phases is complete, also delete the needs:x_verification for that phase.
That will let folks git grep for content that's not done yet.

Also, check open patches before taking a unit to try to avoid duplicate
effort with folks. This process will be a little slower, because it will
be good to cross reference the content with the code to make sure it's
right. For reviewers that are +1ing these patches, please just leave a
comment that you've done that cross check so we know who is looking deep
on these.

We'll see how much progress we can make over the next couple of weeks,
then will try to finish it all up with a virtual doc sprint a couple
weeks after summit.

Thanks again to everyone that's been helping; we've already merged a ton
of good fixes here -
https://review.openstack.org/#/q/status:merged+topic:bp/api-ref-in-rst

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-20 Thread Fox, Kevin M
It's new enough that people haven't thought to ask until recently. The recent 
interest in the topic is starting because Magnum is getting mature enough that folks are 
starting to deploy it and finding out it doesn't solve a bunch of issues they 
had thought it would. It's pretty natural. Don't just blow it off because you 
haven't been asked till now.

I know we're going to try to pilot it here soon, and the only reason I know 
some of the things we will need aren't there now is that I've paid 
close attention to the dev mailing list. Others don't pay such close attention.

Thanks,
Kevin

From: Hongbin Lu [hongbin...@huawei.com]
Sent: Wednesday, April 20, 2016 4:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

> -Original Message-
> From: Keith Bray [mailto:keith.b...@rackspace.com]
> Sent: April-20-16 6:13 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
>
> Magnum doesn't have to preclude tight integration for single COEs you
> speak of.  The heavy lifting of tight integration of the COE in to
> OpenStack (so that it performs optimally with the infra) can be modular
> (where the work is performed by plug-in models to Magnum, not performed
> by Magnum itself. The tight integration can be done by leveraging
> existing technologies (Heat and/or choose your DevOps tool of choice:
> Chef/Ansible/etc). This allows interested community members to focus on
> tight integration of whatever COE they want, focusing specifically on

I agree that tight integration can be achieved by a plugin, but I think the key 
question is who will do the work. If tight integration needs to be done, I 
wonder why it is not part of the Magnum efforts. From my point of view, pushing 
the work out doesn't seem to address the original pain, which is some users 
don't want to explore the complexities of individual COEs.

> the COE integration part, contributing that integration focus to Magnum
> via plug-ins, without having to actually know much about Magnum, but
> instead
> contribute to the COE plug-in using DevOps tools of choice.   Pegging
> Magnum to one-and-only one COE means there will be a Magnum2, Magnum3,
> etc. project for every COE of interest, all with different ways of
> kicking off COE management.  Magnum could unify that experience for
> users and operators, without picking a winner in the COE space - this
> is just like Nova not picking a winner between VM flavors or OS types.
> It just facilitates instantiation and management of things.  Opinion
> here:  The value of Magnum is in being a light-weight/thin API,
> providing modular choice and plug-ability to COE provisioning and
> management, thereby providing operators and users choice of COE
> instantiation and management (via the bay concept), where each COE can
> be as tightly or loosely integrated as desired by different plug-ins
> contributed to perform the COE setup and configurations.  So, Magnum
> could have two or more swarm plug-in options contributed to the
> community.. One overlays generic swarm on VMs.
> The other swarm plug-in could instantiate swarm tightly integrated to
> neutron, keystone, etc. on to bare metal.  Magnum just facilitates a
> plug-in model with a thin API to offer choice of COE instantiation and
> management.
> The plug-in does the heavy lifting using whatever methods desired by
> the curator.
>
> That's my $0.02.
>
> -Keith
>
> On 4/20/16, 4:49 PM, "Joshua Harlow"  wrote:
>
> >Thierry Carrez wrote:
> >> Adrian Otto wrote:
> >>> This pursuit is a trap. Magnum should focus on making native
> >>> container APIs available. We should not wrap APIs with leaky
> >>> abstractions. The lowest common denominator of all COEs is an
> >>> remarkably low value API that adds considerable complexity to
> Magnum
> >>> that will not strategically advance OpenStack. If we instead focus
> >>> our effort on making the COEs work better on OpenStack, that would
> >>> be a winning strategy. Support and compliment our various COE
> ecosystems.
> >
> >So I'm all for avoiding 'wrap APIs with leaky abstractions' and
> 'making
> >COEs work better on OpenStack' but I do dislike the part about COEs
> >(plural) because it is once again the old non-opinionated problem that
> >we (as a community) suffer from.
> >
> >Just my 2 cents, but I'd almost rather we pick one COE and integrate
> >that deeply/tightly with openstack, and yes if this causes some part
> of
>the openstack community to be annoyed, meh, too bad. Sadly I have a
> >feeling we are hurting ourselves by continuing to try to be everything
> >and not picking anything (it's a general thing we, as a group, seem to
> >be good at, lol). I mean I get the reason to just support all the
> >things, but it feels like 

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-20 Thread Fox, Kevin M
I think Magnum is much closer to Sahara or Trove in its workings. Heat is 
orchestration - that's what the COE does.

Sahara has plugins to deploy various Hadoop-like clusters, get them 
assembled into something useful, and has a few abstraction APIs like "submit a 
job to the deployed Hadoop cluster queue."

Trove has plugins to deploy various database things, both SQL and 
NoSQL. It has a few abstractions over all of them for cluster maintenance, 
backups, and DB and user creation.

If all Magnum did was deploy a COE, you could potentially just use Heat to do 
that.

What I want to do is have Heat hooked in closely enough through Magnum that 
Heat templates can deploy COE templates through Magnum resources. Heat tried to 
do that with a Docker resource driver directly, and it's messy, racy, and 
doesn't work very well. Magnum is in a better position to establish a 
communication channel between Heat and the COE due to its back channel into the 
VMs, bypassing the Neutron network stuff.

Thanks,
Kevin

From: Georgy Okrokvertskhov [gokrokvertsk...@mirantis.com]
Sent: Wednesday, April 20, 2016 3:51 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

If Magnum is focused only on installation and management of COEs, it will be 
unclear how much it differs from Heat and other generic orchestration.  
It looks like most of the current Magnum functionality is provided by Heat. 
A Magnum focus on deployment will potentially lead to another Heat-like API.
Unless Magnum is really focused on containers, its value will be minimal for 
OpenStack users who already use Heat/Orchestration.


On Wed, Apr 20, 2016 at 3:12 PM, Keith Bray wrote:
Magnum doesn't have to preclude tight integration for single COEs you
speak of.  The heavy lifting of tight integration of the COE in to
OpenStack (so that it performs optimally with the infra) can be modular
(where the work is performed by plug-in models to Magnum, not performed by
Magnum itself. The tight integration can be done by leveraging existing
technologies (Heat and/or choose your DevOps tool of choice:
Chef/Ansible/etc). This allows interested community members to focus on
tight integration of whatever COE they want, focusing specifically on the
COE integration part, contributing that integration focus to Magnum via
plug-ins, without having to actually know much about Magnum, but instead
contribute to the COE plug-in using DevOps tools of choice.   Pegging
Magnum to one-and-only one COE means there will be a Magnum2, Magnum3,
etc. project for every COE of interest, all with different ways of kicking
off COE management.  Magnum could unify that experience for users and
operators, without picking a winner in the COE space - this is just like
Nova not picking a winner between VM flavors or OS types.  It just
facilitates instantiation and management of things.  Opinion here:  The
value of Magnum is in being a light-weight/thin API, providing modular
choice and plug-ability to COE provisioning and management, thereby
providing operators and users choice of COE instantiation and management
(via the bay concept), where each COE can be as tightly or loosely
integrated as desired by different plug-ins contributed to perform the COE
setup and configurations.  So, Magnum could have two or more swarm plug-in
options contributed to the community.. One overlays generic swarm on VMs.
The other swarm plug-in could instantiate swarm tightly integrated to
neutron, keystone, etc. on to bare metal.  Magnum just facilitates a plug-in
model with a thin API to offer choice of COE instantiation and management.
The plug-in does the heavy lifting using whatever methods desired by the
curator.

That's my $0.02.

-Keith

On 4/20/16, 4:49 PM, "Joshua Harlow" 
> wrote:

>Thierry Carrez wrote:
>> Adrian Otto wrote:
>>> This pursuit is a trap. Magnum should focus on making native container
>>> APIs available. We should not wrap APIs with leaky abstractions. The
>>> lowest common denominator of all COEs is an remarkably low value API
>>> that adds considerable complexity to Magnum that will not
>>> strategically advance OpenStack. If we instead focus our effort on
>>> making the COEs work better on OpenStack, that would be a winning
>>> strategy. Support and compliment our various COE ecosystems.
>
>So I'm all for avoiding 'wrap APIs with leaky abstractions' and 'making
>COEs work better on OpenStack' but I do dislike the part about COEs
>(plural) because it is once again the old non-opinionated problem that
>we (as a community) suffer from.
>
>Just my 2 cents, but I'd almost rather we pick one COE and integrate
>that deeply/tightly with openstack, and yes if this causes some part of
>the openstack community to be annoyed, meh, too bad. Sadly 

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-20 Thread Hongbin Lu


> -Original Message-
> From: Keith Bray [mailto:keith.b...@rackspace.com]
> Sent: April-20-16 6:13 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
> 
> Magnum doesn't have to preclude tight integration for single COEs you
> speak of.  The heavy lifting of tight integration of the COE in to
> OpenStack (so that it performs optimally with the infra) can be modular
> (where the work is performed by plug-in models to Magnum, not performed
> by Magnum itself. The tight integration can be done by leveraging
> existing technologies (Heat and/or choose your DevOps tool of choice:
> Chef/Ansible/etc). This allows interested community members to focus on
> tight integration of whatever COE they want, focusing specifically on

I agree that tight integration can be achieved by a plugin, but I think the key 
question is who will do the work. If tight integration needs to be done, I 
wonder why it is not part of the Magnum effort. From my point of view, pushing 
the work out doesn't seem to address the original pain point, which is that some 
users don't want to deal with the complexities of individual COEs.

> the COE integration part, contributing that integration focus to Magnum
> via plug-ins, without having to actually know much about Magnum, but
> instead
> contribute to the COE plug-in using DevOps tools of choice.   Pegging
> Magnum to one-and-only one COE means there will be a Magnum2, Magnum3,
> etc. project for every COE of interest, all with different ways of
> kicking off COE management.  Magnum could unify that experience for
> users and operators, without picking a winner in the COE space -- this
> is just like Nova not picking a winner between VM flavors or OS types.
> It just facilitates instantiation and management of things.  Opinion
> here:  The value of Magnum is in being a light-weight/thin API,
> providing modular choice and plug-ability to COE provisioning and
> management, thereby providing operators and users choice of COE
> instantiation and management (via the bay concept), where each COE can
> be as tightly or loosely integrated as desired by different plug-ins
> contributed to perform the COE setup and configurations.  So, Magnum
> could have two or more swarm plug-in options contributed to the
> community.. One overlays generic swarm on VMs.
> The other swarm plug-in could instantiate swarm tightly integrated to
> neutron, keystone, etc. on to bare metal.  Magnum just facilitates a
> plug-in model with a thin API to offer choice of COE instantiation and
> management.
> The plug-in does the heavy lifting using whatever methods desired by
> the curator.
> 
> That's my $0.02.
> 
> -Keith
> 
> On 4/20/16, 4:49 PM, "Joshua Harlow"  wrote:
> 
> >Thierry Carrez wrote:
> >> Adrian Otto wrote:
> >>> This pursuit is a trap. Magnum should focus on making native
> >>> container APIs available. We should not wrap APIs with leaky
> >>> abstractions. The lowest common denominator of all COEs is a
> >>> remarkably low-value API that adds considerable complexity to
> Magnum
> >>> that will not strategically advance OpenStack. If we instead focus
> >>> our effort on making the COEs work better on OpenStack, that would
> >>> be a winning strategy. Support and complement our various COE
> ecosystems.
> >
> >So I'm all for avoiding 'wrap APIs with leaky abstractions' and
> 'making
> >COEs work better on OpenStack' but I do dislike the part about COEs
> >(plural) because it is once again the old non-opinionated problem that
> >we (as a community) suffer from.
> >
> >Just my 2 cents, but I'd almost rather we pick one COE and integrate
> >that deeply/tightly with openstack, and yes if this causes some part
> of
> >the openstack community to be annoyed, meh, too bad. Sadly I have a
> >feeling we are hurting ourselves by continuing to try to be everything
> >and not picking anything (it's a general thing we, as a group, seem to
> >be good at, lol). I mean I get the reason to just support all the
> >things, but it feels like we as a community could just pick something,
> >work together on figuring out how to pick one, using all these bright
> >leaders we have to help make that possible (and yes this might piss
> >some people off, too bad). Then work toward making that something great
> >and move on...
> >
> >>
> >> I'm with Adrian on that one. I've attended a lot of
> >> container-oriented conferences over the past year and my main
> >> takeaway is that this new crowd of potential users is not interested
> >> (at all) in an OpenStack-specific lowest common denominator API for
> >> COEs. They want to take advantage of the cool features in Kubernetes
> >> API or the versatility of Mesos. They want to avoid caring about the
> >> infrastructure provider bit (and not deploy Mesos or Kubernetes
> themselves).
> >>
> >> Let's focus on the infrastructure provider bit -- that is what we do
> 

Re: [openstack-dev] summit tools

2016-04-20 Thread Tony Breeds
On Wed, Apr 20, 2016 at 04:13:38PM +, Neil Jerram wrote:
> A couple of questions about our Austin-related planning tools...
> 
> - Can one's calendar at 
> https://www.openstack.org/summit/austin-2016/summit-schedule/#day=2016-04-25 
> be exported as .ics, or otherwise integrated into a wider calendaring 
> system?
> 
> - Is the app working for anyone else?  All I get is 'Oops - there was an 
> error performing this operation' and 'There was a problem loading summit 
> information ...'  My phone is a Blackberry, which means I'm asking for 
> trouble, but OTOH it has an Android runtime and does successfully run 
> several other Android apps.

Small data point: the app works fine for me on Android 6.0.1.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-20 Thread Georgy Okrokvertskhov
If Magnum is focused on installation and management for COEs, it will be
unclear how it differs from Heat and other generic
orchestration tools.  It looks like most of the current Magnum functionality is
provided by Heat. A Magnum focus on deployment will potentially lead to
another Heat-like API.
Unless Magnum is really focused on containers, its value will be minimal for
OpenStack users who already use Heat/Orchestration.


On Wed, Apr 20, 2016 at 3:12 PM, Keith Bray 
wrote:

> Magnum doesn't have to preclude tight integration for single COEs you
> speak of.  The heavy lifting of tight integration of the COE in to
> OpenStack (so that it performs optimally with the infra) can be modular
> (where the work is performed by plug-in models to Magnum, not performed by
> Magnum itself. The tight integration can be done by leveraging existing
> technologies (Heat and/or choose your DevOps tool of choice:
> Chef/Ansible/etc). This allows interested community members to focus on
> tight integration of whatever COE they want, focusing specifically on the
> COE integration part, contributing that integration focus to Magnum via
> plug-ins, without having to actually know much about Magnum, but instead
> contribute to the COE plug-in using DevOps tools of choice.   Pegging
> Magnum to one-and-only one COE means there will be a Magnum2, Magnum3,
> etc. project for every COE of interest, all with different ways of kicking
> off COE management.  Magnum could unify that experience for users and
> operators, without picking a winner in the COE space -- this is just like
> Nova not picking a winner between VM flavors or OS types.  It just
> facilitates instantiation and management of things.  Opinion here:  The
> value of Magnum is in being a light-weight/thin API, providing modular
> choice and plug-ability to COE provisioning and management, thereby
> providing operators and users choice of COE instantiation and management
> (via the bay concept), where each COE can be as tightly or loosely
> integrated as desired by different plug-ins contributed to perform the COE
> setup and configurations.  So, Magnum could have two or more swarm plug-in
> options contributed to the community.. One overlays generic swarm on VMs.
> The other swarm plug-in could instantiate swarm tightly integrated to
> neutron, keystone, etc. on to bare metal.  Magnum just facilitates a plug-in
> model with a thin API to offer choice of COE instantiation and management.
> The plug-in does the heavy lifting using whatever methods desired by the
> curator.
>
> That's my $0.02.
>
> -Keith
>
> On 4/20/16, 4:49 PM, "Joshua Harlow"  wrote:
>
> >Thierry Carrez wrote:
> >> Adrian Otto wrote:
> >>> This pursuit is a trap. Magnum should focus on making native container
> >>> APIs available. We should not wrap APIs with leaky abstractions. The
> >>> lowest common denominator of all COEs is a remarkably low-value API
> >>> that adds considerable complexity to Magnum that will not
> >>> strategically advance OpenStack. If we instead focus our effort on
> >>> making the COEs work better on OpenStack, that would be a winning
> >>> strategy. Support and complement our various COE ecosystems.
> >
> >So I'm all for avoiding 'wrap APIs with leaky abstractions' and 'making
> >COEs work better on OpenStack' but I do dislike the part about COEs
> >(plural) because it is once again the old non-opinionated problem that
> >we (as a community) suffer from.
> >
> >Just my 2 cents, but I'd almost rather we pick one COE and integrate
> >that deeply/tightly with openstack, and yes if this causes some part of
> >the openstack community to be annoyed, meh, too bad. Sadly I have a
> >feeling we are hurting ourselves by continuing to try to be everything
> >and not picking anything (it's a general thing we, as a group, seem to
> >be good at, lol). I mean I get the reason to just support all the
> >things, but it feels like we as a community could just pick something,
> >work together on figuring out how to pick one, using all these bright
> >leaders we have to help make that possible (and yes this might piss some
> >people off, too bad). Then work toward making that something great and
> >move on...
> >
> >>
> >> I'm with Adrian on that one. I've attended a lot of container-oriented
> >> conferences over the past year and my main takeaway is that this new
> >> crowd of potential users is not interested (at all) in an
> >> OpenStack-specific lowest common denominator API for COEs. They want to
> >> take advantage of the cool features in Kubernetes API or the versatility
> >> of Mesos. They want to avoid caring about the infrastructure provider
> >> bit (and not deploy Mesos or Kubernetes themselves).
> >>
> >> Let's focus on the infrastructure provider bit -- that is what we do and
> >> what the ecosystem wants us to provide.
> >>
> >
> >__
> >OpenStack Development 

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-20 Thread Fox, Kevin M
+1 to plugins. it has suited nova/trove/sahara/etc well.

Thanks,
Kevin

From: Keith Bray [keith.b...@rackspace.com]
Sent: Wednesday, April 20, 2016 3:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

Magnum doesn't have to preclude tight integration for single COEs you
speak of.  The heavy lifting of tight integration of the COE in to
OpenStack (so that it performs optimally with the infra) can be modular
(where the work is performed by plug-in models to Magnum, not performed by
Magnum itself. The tight integration can be done by leveraging existing
technologies (Heat and/or choose your DevOps tool of choice:
Chef/Ansible/etc). This allows interested community members to focus on
tight integration of whatever COE they want, focusing specifically on the
COE integration part, contributing that integration focus to Magnum via
plug-ins, without having to actually know much about Magnum, but instead
contribute to the COE plug-in using DevOps tools of choice.   Pegging
Magnum to one-and-only one COE means there will be a Magnum2, Magnum3,
etc. project for every COE of interest, all with different ways of kicking
off COE management.  Magnum could unify that experience for users and
operators, without picking a winner in the COE space -- this is just like
Nova not picking a winner between VM flavors or OS types.  It just
facilitates instantiation and management of things.  Opinion here:  The
value of Magnum is in being a light-weight/thin API, providing modular
choice and plug-ability to COE provisioning and management, thereby
providing operators and users choice of COE instantiation and management
(via the bay concept), where each COE can be as tightly or loosely
integrated as desired by different plug-ins contributed to perform the COE
setup and configurations.  So, Magnum could have two or more swarm plug-in
options contributed to the community.. One overlays generic swarm on VMs.
The other swarm plug-in could instantiate swarm tightly integrated to
neutron, keystone, etc. on to bare metal.  Magnum just facilitates a plug-in
model with a thin API to offer choice of COE instantiation and management.
The plug-in does the heavy lifting using whatever methods desired by the
curator.

That's my $0.02.

-Keith

On 4/20/16, 4:49 PM, "Joshua Harlow"  wrote:

>Thierry Carrez wrote:
>> Adrian Otto wrote:
>>> This pursuit is a trap. Magnum should focus on making native container
>>> APIs available. We should not wrap APIs with leaky abstractions. The
>>> lowest common denominator of all COEs is a remarkably low-value API
>>> that adds considerable complexity to Magnum that will not
>>> strategically advance OpenStack. If we instead focus our effort on
>>> making the COEs work better on OpenStack, that would be a winning
>>> strategy. Support and complement our various COE ecosystems.
>
>So I'm all for avoiding 'wrap APIs with leaky abstractions' and 'making
>COEs work better on OpenStack' but I do dislike the part about COEs
>(plural) because it is once again the old non-opinionated problem that
>we (as a community) suffer from.
>
>Just my 2 cents, but I'd almost rather we pick one COE and integrate
>that deeply/tightly with openstack, and yes if this causes some part of
>the openstack community to be annoyed, meh, too bad. Sadly I have a
>feeling we are hurting ourselves by continuing to try to be everything
>and not picking anything (it's a general thing we, as a group, seem to
>be good at, lol). I mean I get the reason to just support all the
>things, but it feels like we as a community could just pick something,
>work together on figuring out how to pick one, using all these bright
>leaders we have to help make that possible (and yes this might piss some
>people off, too bad). Then work toward making that something great and
>move on...
>
>>
>> I'm with Adrian on that one. I've attended a lot of container-oriented
>> conferences over the past year and my main takeaway is that this new
>> crowd of potential users is not interested (at all) in an
>> OpenStack-specific lowest common denominator API for COEs. They want to
>> take advantage of the cool features in Kubernetes API or the versatility
>> of Mesos. They want to avoid caring about the infrastructure provider
>> bit (and not deploy Mesos or Kubernetes themselves).
>>
>> Let's focus on the infrastructure provider bit -- that is what we do and
>> what the ecosystem wants us to provide.
>>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage 

Re: [openstack-dev] [neutron][nova][oslo] Common backoff & timeout utils

2016-04-20 Thread Brant Knudson
On Wed, Apr 20, 2016 at 3:10 PM, Boden Russell  wrote:

> Today there are a number of places in nova, neutron and perhaps
> elsewhere that employ backoff + timeout strategies (see [1] - [4]).
> While we are working towards a unified approach in neutron for RPC [5],
> it appears such logic could benefit the greater community as a reusable
> oslo implementation.
>
> IMHO such an oslo implementation could:
> - Enable backoff/timeout irrespective of client/transport. This would
> allow the utils to be used with various clients (HTTP, AMQP RPC, etc.).
> - Support namespacing as inspired by the existing neutron patch [5].
> - In the future, perhaps, allow multiple (pluggable) backoff strategies
> to be used (e.g. configurable backoff).
>
> Anyone adverse to me crafting an initial oslo patch to kick-off the
> details on this one?
>
>
> Thanks
>
>
> [1]
> https://github.com/openstack/nova/blob/master/nova/conductor/api.py#L162
> [2]
>
> https://github.com/openstack/nova/blob/3cdaa30566c17a2add5d9163a0693c97dc1d065b/nova/scheduler/utils.py#L356
> [3]
>
> https://github.com/openstack/neutron/blob/dd4f1253c951d78a5b497680dfb31317ba469a58/neutron/agent/l3/agent.py#L224
> [4]
>
> https://github.com/openstack/neutron/blob/42c6f05f2904cd4c01bcd3f79b1966489f5ad3c1/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py#L189
> [5] https://review.openstack.org/#/c/280595/
>
>
oslo.utils has a decorator that will retry after a delay -
http://docs.openstack.org/developer/oslo.utils/api/excutils.html#oslo_utils.excutils.forever_retry_uncaught_exceptions
- which doesn't do backoff, but that would be easy enough to add. Alternatively,
if an existing library works better, maybe the oslo utility should be
deprecated.

- Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nova] scheduling bandwidth resources / NIC_BW_KB resource class

2016-04-20 Thread Matt Riedemann

On 4/20/2016 8:25 AM, Miguel Angel Ajo Pelayo wrote:

Inline update.

On Mon, Apr 11, 2016 at 4:22 PM, Miguel Angel Ajo Pelayo
 wrote:

On Mon, Apr 11, 2016 at 1:46 PM, Jay Pipes  wrote:

On 04/08/2016 09:17 AM, Miguel Angel Ajo Pelayo wrote:

[...]

Yes, Nova's conductor gathers information about the requested networks
*before* asking the scheduler where to place hosts:

https://github.com/openstack/nova/blob/stable/mitaka/nova/conductor/manager.py#L362


 That would require identifying that the port has a "qos_policy_id"
attached to it, and then, asking neutron for the specific QoS policy
  [3], then look out for a minimum bandwidth rule (still to be defined),
and extract the required bandwidth from it.



Yep, exactly correct.


That moves, again, some of the responsibility for examining and
understanding external resources onto nova.



Yep, it does. The alternative is more retries for placement decisions
because accurate decisions cannot be made until the compute node is already
selected and the claim happens on the compute node.


 Could it make sense to make that part pluggable via stevedore, so
we would provide something that takes the "resource id" (for a port in
this case) and returns the requirements translated to resource classes
(NIC_BW_KB in this case).



Not sure Stevedore makes sense in this context. Really, we want *less*
extensibility and *more* consistency. So, I would envision rather a system
where Nova would call to Neutron before scheduling when it has received a
port or network ID in the boot request and ask Neutron whether the port or
network has any resource constraints on it. Neutron would return a
standardized response containing each resource class and the amount
requested in a dictionary (or better yet, an os_vif.objects.* object,
serialized). Something like:

{
  'resources': {
'': {
  'NIC_BW_KB': 2048,
  'IPV4_ADDRESS': 1
}
  }
}



Oh, true, that's a great idea, having some API that translates a
neutron resource to scheduling constraints. The external call will
still be required, but the coupling issue is removed.





I had a talk yesterday with @iharchys, @dansmith, and @sbauzas about
this, and we believe the synthesis of resource usage / scheduling
constraints from neutron makes sense.

We should probably look into providing those details in a read-only
dictionary during port creation/update/show in general; that way,
we would not be adding an extra API call to neutron from the nova
scheduler to figure out any of those details. That extra optimization
is something we may need to discuss with the neutron community.


Note that I think the only time Nova gets details about ports in the API 
during a server create request is when doing the network request 
validation, and that's only if there is a fixed IP address or specific 
port(s) in the request, otherwise Nova just gets the networks. [1]







In the case of the NIC_BW_KB resource class, Nova's scheduler would look for
compute nodes that had a NIC with that amount of bandwidth still available.
In the case of the IPV4_ADDRESS resource class, Nova's scheduler would use
the generic-resource-pools interface to find a resource pool of IPV4_ADDRESS
resources (i.e. a Neutron routed network or subnet allocation pool) that has
available IP space for the request.
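(Purely as an illustration of that filtering step, a minimal sketch in Python; the data shapes and the function name hosts_with_capacity are invented for this example, not actual scheduler interfaces:)

def hosts_with_capacity(hosts, requested):
    """Keep hosts whose free capacity covers every requested resource class.

    `hosts` maps host name -> {resource_class: free_amount}, e.g.
    {"node1": {"NIC_BW_KB": 4096, "IPV4_ADDRESS": 12}}, and `requested`
    is the per-port dict Neutron would hand back, e.g. {"NIC_BW_KB": 2048}.
    """
    return [name for name, free in hosts.items()
            if all(free.get(rc, 0) >= amount
                   for rc, amount in requested.items())]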



Not sure about the IPV4_ADDRESS part because I still haven't looked at
how they resolve routed networks with this new framework, but for
other constraints it makes perfect sense to me.


Best,
-jay



Best regards,
Miguel Ángel Ajo


[1]

http://lists.openstack.org/pipermail/openstack-dev/2016-February/086371.html
[2] https://bugs.launchpad.net/neutron/+bug/1560963
[3]
http://developer.openstack.org/api-ref-networking-v2-ext.html#showPolicy


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[1] 
https://github.com/openstack/nova/blob/ee7a01982611cdf8012a308fa49722146c51497f/nova/network/neutronv2/api.py#L1123


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-20 Thread Keith Bray
Magnum doesn't have to preclude tight integration for single COEs you
speak of.  The heavy lifting of tight integration of the COE in to
OpenStack (so that it performs optimally with the infra) can be modular
(where the work is performed by plug-in models to Magnum, not performed by
Magnum itself. The tight integration can be done by leveraging existing
technologies (Heat and/or choose your DevOps tool of choice:
Chef/Ansible/etc). This allows interested community members to focus on
tight integration of whatever COE they want, focusing specifically on the
COE integration part, contributing that integration focus to Magnum via
plug-ins, without having to actually know much about Magnum, but instead
contribute to the COE plug-in using DevOps tools of choice.   Pegging
Magnum to one-and-only one COE means there will be a Magnum2, Magnum3,
etc. project for every COE of interest, all with different ways of kicking
off COE management.  Magnum could unify that experience for users and
> operators, without picking a winner in the COE space -- this is just like
Nova not picking a winner between VM flavors or OS types.  It just
> facilitates instantiation and management of things.  Opinion here:  The
value of Magnum is in being a light-weight/thin API, providing modular
choice and plug-ability to COE provisioning and management, thereby
providing operators and users choice of COE instantiation and management
(via the bay concept), where each COE can be as tightly or loosely
integrated as desired by different plug-ins contributed to perform the COE
setup and configurations.  So, Magnum could have two or more swarm plug-in
options contributed to the community.. One overlays generic swarm on VMs.
The other swarm plug-in could instantiate swarm tightly integrated to
> neutron, keystone, etc. on to bare metal.  Magnum just facilitates a plug-in
> model with a thin API to offer choice of COE instantiation and management.
The plug-in does the heavy lifting using whatever methods desired by the
curator.

> That's my $0.02.

-Keith

On 4/20/16, 4:49 PM, "Joshua Harlow"  wrote:

>Thierry Carrez wrote:
>> Adrian Otto wrote:
>>> This pursuit is a trap. Magnum should focus on making native container
>>> APIs available. We should not wrap APIs with leaky abstractions. The
> >>> lowest common denominator of all COEs is a remarkably low-value API
>>> that adds considerable complexity to Magnum that will not
>>> strategically advance OpenStack. If we instead focus our effort on
>>> making the COEs work better on OpenStack, that would be a winning
> >>> strategy. Support and complement our various COE ecosystems.
>
>So I'm all for avoiding 'wrap APIs with leaky abstractions' and 'making
>COEs work better on OpenStack' but I do dislike the part about COEs
>(plural) because it is once again the old non-opinionated problem that
>we (as a community) suffer from.
>
>Just my 2 cents, but I'd almost rather we pick one COE and integrate
>that deeply/tightly with openstack, and yes if this causes some part of
> >the openstack community to be annoyed, meh, too bad. Sadly I have a
>feeling we are hurting ourselves by continuing to try to be everything
>and not picking anything (it's a general thing we, as a group, seem to
>be good at, lol). I mean I get the reason to just support all the
>things, but it feels like we as a community could just pick something,
>work together on figuring out how to pick one, using all these bright
>leaders we have to help make that possible (and yes this might piss some
> >people off, too bad). Then work toward making that something great and
>move on...
>
>>
>> I'm with Adrian on that one. I've attended a lot of container-oriented
>> conferences over the past year and my main takeaway is that this new
>> crowd of potential users is not interested (at all) in an
>> OpenStack-specific lowest common denominator API for COEs. They want to
>> take advantage of the cool features in Kubernetes API or the versatility
>> of Mesos. They want to avoid caring about the infrastructure provider
>> bit (and not deploy Mesos or Kubernetes themselves).
>>
>> Let's focus on the infrastructure provider bit -- that is what we do and
>> what the ecosystem wants us to provide.
>>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-keystoneclient] Return request-id to caller

2016-04-20 Thread Brant Knudson
On Wed, Apr 20, 2016 at 6:31 AM, Duncan Thomas 
wrote:

> On 20 April 2016 at 08:08, koshiya maho 
> wrote:
>
>
>> This design was discussed, reviewed and approved as a cross-project spec [1] and
>> already implemented in nova, cinder and neutron.
>> At this point if we change the implementation then it will not be
>> consistent across core OpenStack projects.
>> For maintenance of the whole of OpenStack, I think that the present
>> method is best.
>> Please suggest.
>>
>
>
I haven't been asking for a complete redesign. I just want this to be
opt-in to minimize the chance of an impact on existing applications. We can
eventually deprecate the old behavior.

- Brant



> The fact that a cross-project spec is approved doesn't mean that it will
> end up being practical. If the cinder-client implementation had been found
> to break any non-trivial users then I wouldn't have hesitated.
>
> Cross project specs are not getting massive amounts of detailed attention
> from project teams, and even if they were it is not possible to foresee all
> subtle problems at review time - they should be taken as guidance not
> gospel and expect to be reworked if it proves necessary.
>
> --
> Duncan Thomas
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-20 Thread Joshua Harlow

Thierry Carrez wrote:

Adrian Otto wrote:

This pursuit is a trap. Magnum should focus on making native container
APIs available. We should not wrap APIs with leaky abstractions. The
lowest common denominator of all COEs is a remarkably low-value API
that adds considerable complexity to Magnum that will not
strategically advance OpenStack. If we instead focus our effort on
making the COEs work better on OpenStack, that would be a winning
strategy. Support and complement our various COE ecosystems.


So I'm all for avoiding 'wrap APIs with leaky abstractions' and 'making
COEs work better on OpenStack' but I do dislike the part about COEs 
(plural) because it is once again the old non-opinionated problem that 
we (as a community) suffer from.


Just my 2 cents, but I'd almost rather we pick one COE and integrate 
that deeply/tightly with openstack, and yes if this causes some part of 
the openstack community to be annoyed, meh, too bad. Sadly I have a 
feeling we are hurting ourselves by continuing to try to be everything 
and not picking anything (it's a general thing we, as a group, seem to 
be good at, lol). I mean I get the reason to just support all the 
things, but it feels like we as a community could just pick something, 
work together on figuring out how to pick one, using all these bright 
leaders we have to help make that possible (and yes this might piss some 
people off, too bad). Then work toward making that something great and 
move on...




I'm with Adrian on that one. I've attended a lot of container-oriented
conferences over the past year and my main takeaway is that this new
crowd of potential users is not interested (at all) in an
OpenStack-specific lowest common denominator API for COEs. They want to
take advantage of the cool features in Kubernetes API or the versatility
of Mesos. They want to avoid caring about the infrastructure provider
bit (and not deploy Mesos or Kubernetes themselves).

Let's focus on the infrastructure provider bit -- that is what we do and
what the ecosystem wants us to provide.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Seek advices for a licence issue

2016-04-20 Thread Hongbin Lu
Hi Mark,

I have gone through the announcement in detail. From my point of view, it seems 
to resolve the license issue that was blocking us before. I have included 
the Magnum team in the ML to see if our team members have any comments.

Thanks for the support from the Foundation.

Best regards,
Hongbin

From: Mark Collier [mailto:m...@openstack.org]
Sent: April-19-16 12:36 PM
To: Hongbin Lu
Cc: foundat...@lists.openstack.org; Guang Ya GY Liu
Subject: Re: [OpenStack Foundation] [magnum] Seek advices for a licence issue

Hopefully today’s news that Mesosphere is open sourcing major components of 
DCOS under an Apache 2.0 license will make things easier:

https://mesosphere.com/blog/2016/04/19/open-source-dcos/

I’ll be interested to hear your take after you have time to look at it in more 
detail, Hongbin.

Mark



On Apr 9, 2016, at 10:02 AM, Hongbin Lu 
> wrote:

Hi all,

A brief introduction to myself. I am the Magnum Project Team Lead (PTL). Magnum 
is the OpenStack container service. I wrote this email because the Magnum team 
is seeking clarification on a licence issue around shipping third-party software 
(DCOS [1] in particular), and I was advised to consult the OpenStack Board of 
Directors in this regard.

Before getting into the question, I think it is better to provide some 
background information. A feature provided by Magnum is to provision container 
management tools on top of a set of Nova instances. One of the container 
management tools Magnum supports is Apache Mesos [2]. Generally speaking, Magnum 
ships Mesos by providing a custom cloud image with the necessary packages 
pre-installed. So far, all the shipped components are open source with 
appropriate licenses, so we are good on that front.

Recently, one of our contributors suggested extending the Mesos support to DCOS 
[3]. The Magnum team is unclear whether there is a license issue with shipping DCOS, 
which looks like a closed-source product but has a community edition in Amazon Web 
Services [4]. I want to know what appropriate actions the Magnum team 
should take in this pursuit, or whether we should stop pursuing this direction further. 
Advice is greatly appreciated. Please let us know if we need to provide 
further information. Thanks.

[1] https://docs.mesosphere.com/
[2] http://mesos.apache.org/
[3] https://blueprints.launchpad.net/magnum/+spec/mesos-dcos
[4] 
https://docs.mesosphere.com/administration/installing/installing-community-edition/

Best regards,
Hongbin



___
Foundation mailing list
foundat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova][oslo] Common backoff & timeout utils

2016-04-20 Thread Joshua Harlow

Feel free to take the following (if it's similar to what you are thinking):

https://github.com/openstack/anvil/blob/master/anvil/utils.py#L90

IMHO though, if it's a decorator, the retrying library can already perform 
this:


https://pypi.python.org/pypi/retrying

And a couple of the oslo-cores (jd, me) have contributed to that library 
and can explain how to use it...
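(For readers who haven't used it, a minimal sketch of retrying's decorator API, using the parameter names from the retrying docs; BackendTimeout and call_backend are invented placeholders, not real client code:)

from retrying import retry


class BackendTimeout(Exception):
    """Stand-in for whatever timeout exception the real client raises."""


def _is_retryable(exc):
    # Only retry timeouts; let other errors propagate immediately.
    return isinstance(exc, BackendTimeout)


@retry(wait_exponential_multiplier=1000,  # waits grow roughly as 2^n seconds...
       wait_exponential_max=10000,        # ...capped at 10s per attempt
       stop_max_delay=60000,              # give up after 60s overall
       retry_on_exception=_is_retryable)
def call_backend():
    # Placeholder for the real HTTP/AMQP call that may time out.
    pass

That covers the backoff and overall-timeout pieces; the namespacing and oslo.config wiring discussed in this thread would still need to live in the wrapper.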


Overall though, let's see an initial patch and I guess we can see where 
this goes and how much something like it may already exist elsewhere...


-Josh

Amrith Kumar wrote:

Boden,

Are you thinking of implementing something which would perform exponentially 
backed off calls to some arbitrary function till that method returns with 
something other than a timeout?

I think that would be very versatile, and useful in a wide variety of places.

Thanks,

-amrith


-Original Message-
From: Davanum Srinivas [mailto:dava...@gmail.com]
Sent: Wednesday, April 20, 2016 4:45 PM
To: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [neutron][nova][oslo] Common backoff&
timeout utils

Sounds good to me Boden.

-- Dims

On Wed, Apr 20, 2016 at 4:10 PM, Boden Russell  wrote:

Today there are a number of places in nova, neutron and perhaps
elsewhere that employ backoff + timeout strategies (see [1] - [4]).
While we are working towards a unified approach in neutron for RPC
[5], it appears such logic could benefit the greater community as a
reusable oslo implementation.

IMHO such an oslo implementation could:
- Enable backoff/timeout irrespective of client/transport. This would
allow the utils to be used with various clients (HTTP, AMQP RPC, etc.).
- Support namespacing as inspired by the existing neutron patch [5].
- In the future, perhaps, allow multiple (pluggable) backoff
strategies to be used (e.g. configurable backoff).

Anyone adverse to me crafting an initial oslo patch to kick-off the
details on this one?


Thanks


[1]
https://github.com/openstack/nova/blob/master/nova/conductor/api.py#L1
62
[2]
https://github.com/openstack/nova/blob/3cdaa30566c17a2add5d9163a0693c9
7dc1d065b/nova/scheduler/utils.py#L356
[3]
https://github.com/openstack/neutron/blob/dd4f1253c951d78a5b497680dfb3
1317ba469a58/neutron/agent/l3/agent.py#L224
[4]
https://github.com/openstack/neutron/blob/42c6f05f2904cd4c01bcd3f79b19
66489f5ad3c1/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neu
tron_agent.py#L189 [5] https://review.openstack.org/#/c/280595/

__
 OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova][oslo] Common backoff & timeout utils

2016-04-20 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2016-04-20 22:16:10 +0100:
> 
> Will the already existing retrying[1] do the job or is it missing
> features (the namespacing thing seems like it could be an issue)
> or perhaps too generic?
> 
> [1] https://pypi.python.org/pypi/retrying
> 

Yes, please, let's try to make that work and contribute upstream if we
need minor modifications, before we create something new.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova][oslo] Common backoff & timeout utils

2016-04-20 Thread Chris Dent


Will the already existing retrying[1] do the job or is it missing
features (the namespacing thing seems like it could be an issue)
or perhaps too generic?

[1] https://pypi.python.org/pypi/retrying

--
Chris Dent   http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova][oslo] Common backoff & timeout utils

2016-04-20 Thread Amrith Kumar
Boden,

Are you thinking of implementing something which would perform exponentially 
backed off calls to some arbitrary function till that method returns with 
something other than a timeout?

I think that would be very versatile, and useful in a wide variety of places.

Thanks,

-amrith

> -Original Message-
> From: Davanum Srinivas [mailto:dava...@gmail.com]
> Sent: Wednesday, April 20, 2016 4:45 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [neutron][nova][oslo] Common backoff &
> timeout utils
> 
> Sounds good to me Boden.
> 
> -- Dims
> 
> On Wed, Apr 20, 2016 at 4:10 PM, Boden Russell  wrote:
> > Today there are a number of places in nova, neutron and perhaps
> > elsewhere that employ backoff + timeout strategies (see [1] - [4]).
> > While we are working towards a unified approach in neutron for RPC
> > [5], it appears such logic could benefit the greater community as a
> > reusable oslo implementation.
> >
> > IMHO such an oslo implementation could:
> > - Enable backoff/timeout irrespective of client/transport. This would
> > allow the utils to be used with various clients (HTTP, AMQP RPC, etc.).
> > - Support namespacing as inspired by the existing neutron patch [5].
> > - In the future, perhaps, allow multiple (pluggable) backoff
> > strategies to be used (e.g. configurable backoff).
> >
> > Anyone adverse to me crafting an initial oslo patch to kick-off the
> > details on this one?
> >
> >
> > Thanks
> >
> >
> > [1]
> > https://github.com/openstack/nova/blob/master/nova/conductor/api.py#L1
> > 62
> > [2]
> > https://github.com/openstack/nova/blob/3cdaa30566c17a2add5d9163a0693c9
> > 7dc1d065b/nova/scheduler/utils.py#L356
> > [3]
> > https://github.com/openstack/neutron/blob/dd4f1253c951d78a5b497680dfb3
> > 1317ba469a58/neutron/agent/l3/agent.py#L224
> > [4]
> > https://github.com/openstack/neutron/blob/42c6f05f2904cd4c01bcd3f79b19
> > 66489f5ad3c1/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neu
> > tron_agent.py#L189 [5] https://review.openstack.org/#/c/280595/
> >
> > __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> --
> Davanum Srinivas :: https://twitter.com/dims
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-20 Thread Thomas Goirand
tl;dr: You're right, but the point I was making was that all distros are
understaffed.

Longer version:

On 04/19/2016 06:24 PM, Ian Cordasco wrote:
>> You can also add "Ubuntu" in the list here, as absolutely all OpenStack
>> dependencies are maintained mostly by me, within Debian, and then later
> 
> "absolutely all" of OpenStack's dependencies are not maintained by you
> in Debian. A significant number are maintained by the DPMT (Debian
> Python Modules Team). The large majority are maintained by you, but not
> "absolutely all".

That's absolutely right. Though, you're probably missing my point here.

I'll explain the situation in more detail, to do justice to everyone.

Ubuntu OpenStack packaging is largely done in Debian. These days, Corey
Briant and David Dellav (who are both Canonical employees) are pushing
directly to the Git on alioth.debian.org. Unfortunately, they don't have
upload rights to Debian, so I have to review and upload their work.
Later on, the packages are synched from Debian to Ubuntu (if they didn't
push it directly to avoid waiting, as I sometimes can't be as reactive as
I would like).

What I called "absolutely all OpenStack dependencies" was referring to
all the Python modules that OpenStack produces (Oslo, python-*client,
and all the other libs). So in that way, my sentence was correct.
However, there are a few general-purpose libraries maintained within the
DPMT (Debian Python Module Team), though there are also 100+ general-purpose
python modules maintained within the PKG OpenStack team on
Alioth as well. It's hard to draw a clear line anyway.

And so as not to forget anybody, I'd like to salute Ondřej Nový, who
completely took over all things for Swift, and slowly shifted to do more
and more packaging work. I also just gave all ACLs to my colleague Ivan,
and we decided he would take care of all things for Horizon, which is *a
lot* of work.

So anyway, let's go back to my point of argumentation and stop
digressing! :)

The point that I was making is that I know that there's not more
staffing in Ubuntu either (since we package together everything which is
not server packages), and we're probably at approximately the same
number of people in Debian/Ubuntu as for RDO: around two and a half people
full time, with a bit of contributions here and there.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova][oslo] Common backoff & timeout utils

2016-04-20 Thread Kevin L. Mitchell
On Wed, 2016-04-20 at 14:10 -0600, Boden Russell wrote:
> Anyone adverse to me crafting an initial oslo patch to kick-off the
> details on this one?

Have you evaluated any existing solutions in this space?  A quick search
on PyPi turns up "backoff", which seems to provide several backoff
strategies.  Of course, it's a decorator-based backoff implementation,
whereas I tend to implement iterator-based solutions, but there may
already be a solution in that space as well…
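(For comparison, a minimal sketch of the iterator-based style, not taken from any existing library; backoff_intervals and call_with_backoff are invented names:)

import itertools
import random
import time


def backoff_intervals(base=0.5, cap=30.0, jitter=0.1):
    """Yield exponentially growing sleep intervals with a little jitter."""
    for attempt in itertools.count():
        delay = min(cap, base * (2 ** attempt))
        yield delay + random.uniform(0, jitter * delay)


def call_with_backoff(func, timeout=60.0):
    """Call func() until it succeeds or the overall timeout expires."""
    deadline = time.time() + timeout
    for delay in backoff_intervals():
        try:
            return func()
        except Exception:
            # Re-raise once sleeping again would blow past the deadline.
            if time.time() + delay > deadline:
                raise
            time.sleep(delay)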
-- 
Kevin L. Mitchell 


signature.asc
Description: This is a digitally signed message part
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova][oslo] Common backoff & timeout utils

2016-04-20 Thread Davanum Srinivas
Sounds good to me Boden.

-- Dims

On Wed, Apr 20, 2016 at 4:10 PM, Boden Russell  wrote:
> Today there are a number of places in nova, neutron and perhaps
> elsewhere that employ backoff + timeout strategies (see [1] - [4]).
> While we are working towards a unified approach in neutron for RPC [5],
> it appears such logic could benefit the greater community as a reusable
> oslo implementation.
>
> IMHO such an oslo implementation could:
> - Enable backoff/timeout irrespective of client/transport. This would
> allow the utils to be used with various clients (HTTP, AMQP RPC, etc.).
> - Support namespacing as inspired by the existing neutron patch [5].
> - In the future, perhaps, allow multiple (pluggable) backoff strategies
> to be used (e.g. configurable backoff).
>
> Anyone adverse to me crafting an initial oslo patch to kick-off the
> details on this one?
>
>
> Thanks
>
>
> [1] https://github.com/openstack/nova/blob/master/nova/conductor/api.py#L162
> [2]
> https://github.com/openstack/nova/blob/3cdaa30566c17a2add5d9163a0693c97dc1d065b/nova/scheduler/utils.py#L356
> [3]
> https://github.com/openstack/neutron/blob/dd4f1253c951d78a5b497680dfb31317ba469a58/neutron/agent/l3/agent.py#L224
> [4]
> https://github.com/openstack/neutron/blob/42c6f05f2904cd4c01bcd3f79b1966489f5ad3c1/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py#L189
> [5] https://review.openstack.org/#/c/280595/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] openstack client slowness / client-as-a-service

2016-04-20 Thread Morgan Fainberg
On Wed, Apr 20, 2016 at 10:28 AM, Dean Troyer  wrote:

> On Wed, Apr 20, 2016 at 9:43 AM, Doug Hellmann 
> wrote:
>
>> Cliff looks for commands on demand. If we modify its command loader to
>> support some "built in" commands, and then implement the commands in OSC
>> that way, we can avoid scanning the real plugin system until we hit a
>> command that isn't built in.
>>
>
> Maybe down the road once this becomes a bigger percentage of the startup
> time; for now I do not (yet) believe the plugins are the problem others
> think they are.  See below...
>
>
>> The last time I proposed that, though, someone (Dean?) pointed out that
>> loading the plugins wasn't actually where OSC was spending its time. So,
>> maybe we should profile the thing before proposing code changes.
>>
>
> It has been a while since I looked into this in detail; we made a couple
> of changes then that helped, and since then the world kept moving and we're
> behind again. Right now OSC is a mess WRT keystoneclient/keystoneauth and
> their interactions with os-client-config.  We have lost sight of who is
> supposed to be doing what here.  I know for a fact that there is
> duplication in auth setup, we often make duplicate REST auth calls.
>
> OSC 2.4.0 should be released today/tomorrow, following which we begin
> merging the cleanup starting with the ksc/ksa bits.
>
> Here are a couple of things to consider for those who want to investigate
> further:
> * OSC does not load _any_ project client (modulo the ksc bit mentioned
> above) unless/until it is actually needed to make a REST call.
> * Timing on a help command includes a complete scan of all entry points to
> generate the list of commands
> * The --timing option lists all REST calls that properly go through our
> TimingSession object.  That should be all of them unless a library doesn't
> use the session it is given (the ones used by the commands in the OSC repo
> all do this correctly).
> * Interactive mode can be useful to get timing on just the setup/teardown
> process without actually running a command:
>
>   time openstack 
> So time for a couple of fun baselines, using the OSC SHA proposed for
> 2.4.0 (4639148b1d) on a Lenovo T420s with Ubuntu 14.0.4 against a DevStack
> on an Ubuntu 14.0.3 VM on a moderately-sized local NUC:
>
> * time openstack --timing
>   * py2: 0m0.307s
>   * py3: 0m0.376s
>
> * time openstack --timing help
>   * py2: 0m1.939s
>   * py3: 0m1.803s
>
> * time openstack --timing catalog list
>   * py2: 0m0.675s - 0.360 REST = 0.315s
>   * py3: 0m0.704s - 0.312 REST = 0.392s
>
> * time openstack --timing flavor list
>   * py2: 0m0.772s - 0.447 REST = 0.325s
>   * py3: 0m2.563s - 2.146 REST = 0.417s
>
> * time openstack --timing image list
>   * py2: 0m0.860s - 0.517 REST = 0.343s
>   * py3: 0m0.952s - 0.539 REST = 0.423s
>
> Are there any conclusions to draw from this seat-of-the-pants look?
>
> * The differences between py2 and py3 are small, and not consistent.
> * The time for OSC to load and then exit immediately is within 0.1s of the
> time to execute a near-trivial command when the REST round trip times are
> removed.
> * Two auth calls are consistently being made, this is one of the things
> actively being cleaned up with the ksc/ksa transition bits.  The additional
> REST round trip in these tests is consistently between 0.14s and 0.2s, so
> that gain will come soon.
>
> I also have buried in my notes some out-of-date results from using
> boris-42's profimp that pointed to keystoneclient as the largest single
> static import being done, accounting for nearly half of the total static
> load time.  The transition to using ksa will help here; I do not have
> profimp numbers for that yet.
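(For anyone without profimp handy, a rough first-order approximation of the static import cost can be had with plain Python; the module names below are just examples, and each should be measured in a fresh interpreter so import caching doesn't hide the cost:)

import importlib
import time


def time_import(name):
    """Return how long the first import of `name` takes, in seconds."""
    start = time.time()
    importlib.import_module(name)
    return time.time() - start


for mod in ("keystoneclient", "keystoneauth1"):
    try:
        print("%s: %.3fs" % (mod, time_import(mod)))
    except ImportError:
        print("%s: not installed" % mod)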
>
> dt
>
>
This was one of the driving forces behind the move to KSA in general,
eliminating and isolating the auth library to as small a footprint as
possible.

There is still some loading overhead due to the auth plugin system (hard to
eliminate completely), but simple back-of-the-napkin numbers we did
initially showed KeystoneAuth way ahead in the game (and much easier to
improve upon since it is not tied to keystone CRUD interactions to get auth
performance improvements released).
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposed changes to TripleO project tags

2016-04-20 Thread James Slagle
On Wed, Apr 20, 2016 at 6:16 AM, Steven Hardy  wrote:
> All,
>
> We discussed some changes to our release cycle in the weekly meeting
> yesterday, namely to align with the intended direction of the puppet
> community to adopt the new cycle-trailing tag[1].
>
> We have also discussed and agreed[2] to adopt the standard stable branch
> policy[3] for those repos where we maintain stable/mitaka branches (from
> mitaka onwards, superseding the "release branch" experimient we tried
> for stable/liberty)[4].
>
> To reflect these changes via the governance repo, I've proposed a patch[5]
> that changes the tags to release:cycle-trailing and stable:follows-policy
> for those repos where it is appropriate.
>
> Please provide feedback on that review if you have any comments, questions
> or concerns about this, thanks!
>
> Steve

Thanks, I also pushed a followup patch to move the other projects from
release:independent to release:cycle-with-intermediary. However, I
left diskimage-builder and dib-utils as release:independent for now
pending the discussion in the dib thread.

https://review.openstack.org/308574



-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][nova][oslo] Common backoff & timeout utils

2016-04-20 Thread Boden Russell
Today there are a number of places in nova, neutron and perhaps
elsewhere that employ backoff + timeout strategies (see [1] - [4]).
While we are working towards a unified approach in neutron for RPC [5],
it appears such logic could benefit the greater community as a reusable
oslo implementation.

IMHO such an oslo implementation could:
- Enable backoff/timeout irrespective of client/transport. This would
allow the utils to be used with various clients (HTTP, AMQP RPC, etc.).
- Support namespacing as inspired by the existing neutron patch [5].
- In the future, perhaps, allow multiple (pluggable) backoff strategies
to be used (e.g. configurable backoff).

Anyone adverse to me crafting an initial oslo patch to kick-off the
details on this one?


Thanks


[1] https://github.com/openstack/nova/blob/master/nova/conductor/api.py#L162
[2]
https://github.com/openstack/nova/blob/3cdaa30566c17a2add5d9163a0693c97dc1d065b/nova/scheduler/utils.py#L356
[3]
https://github.com/openstack/neutron/blob/dd4f1253c951d78a5b497680dfb31317ba469a58/neutron/agent/l3/agent.py#L224
[4]
https://github.com/openstack/neutron/blob/42c6f05f2904cd4c01bcd3f79b1966489f5ad3c1/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py#L189
[5] https://review.openstack.org/#/c/280595/
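(To make the proposal concrete, one possible shape for such a utility -- purely a sketch, not an existing oslo API; retry_with_backoff and exponential are invented names:)

import functools
import time


def exponential(base=1.0, cap=60.0):
    """One pluggable strategy: exponential backoff capped at `cap` seconds."""
    def delays():
        attempt = 0
        while True:
            yield min(cap, base * (2 ** attempt))
            attempt += 1
    return delays


def retry_with_backoff(strategy=exponential(), timeout=120.0,
                       exceptions=(Exception,)):
    """Decorator: retry the wrapped call, agnostic of transport (HTTP, RPC...)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            deadline = time.time() + timeout
            for delay in strategy():
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    # Stop retrying once another wait would exceed the timeout.
                    if time.time() + delay > deadline:
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator

Namespacing (per-service config groups) and additional strategies could then hang off the same entry point.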

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [networking-sfc] A standards-compliant SFC API

2016-04-20 Thread Duarte Cardoso, Igor
Thanks for the feedback Armando,

Adding missing tag.

Best regards,
Igor.

From: Armando M. [mailto:arma...@gmail.com]
Sent: Wednesday, April 20, 2016 6:03 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [neutron][sfc] A standards-compliant SFC API


On 20 April 2016 at 09:31, Duarte Cardoso, Igor 
> wrote:
Dear OpenStack Community,

We've been investigating options in/around OpenStack for supporting Service 
Function Chaining. The networking-sfc project has made significant progress in 
this space, and we see lots of value in what has been completed. However, when 
we looked at the related IETF specs on SFC we concluded that there would be 
value in further developing an SFC API and related classification functionality 
to enhance the alignment between the work in the OpenStack community and the 
standards work. We would like to propose the SFC part as a potential 
networking-sfc v2 API, but are open to other options too based on your feedback.

I have submitted a spec to the neutron-specs repo [1], where you can check what 
our initial thoughts for this new API are, and provide your feedback or 
questions regarding the same.

Your thoughts on this are deeply appreciated. We are looking forward to having 
further discussions with everyone interested in giving feedback or establishing 
collaborations during the OpenStack Summit in Austin.

[1] https://review.openstack.org/#/c/308453

Thanks for reaching out.

The networking-sfc initiative so far has been pretty autonomous. The project 
has its own launchpad project [1] and its own docs to document APIs and 
proposals [2]. During the long journey that Neutron has been through, we have 
been adjusting how to manage the project in order to strike a good balance 
between development agility, product stability and community needs. We're 
always looking forward to improving that balance and this means that how we 
track certain initiatives may evolve in the future. For now, it's probably best 
to target the mailing list with tag [networking-sfc] (in addition to neutron), 
as well as the project noted below.

[1] https://launchpad.net/networking-sfc
[2] http://docs.openstack.org/developer/networking-sfc/


Thank you,
Igor & the Intel OpenStack networking team.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision minion nodes

2016-04-20 Thread Ricardo Rocha
Hi Hongbin.

On Wed, Apr 20, 2016 at 8:13 PM, Hongbin Lu  wrote:
>
>
>
>
> From: Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
> [mailto:li-gong.d...@hpe.com]
> Sent: April-20-16 3:39 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision
> minion nodes
>
>
>
> Hi Folks,
>
>
>
> We are considering whether Magnum can supports 2 Nova flavors to provision
> Kubernetes and other COE minion nodes.
>
> This requirement comes from the below use cases:
>
> -  There are 2 kind of baremetal machines in customer site: one is
> legacy machines which doesn’t support UEFI secure boot and others are new
> machines which support UEFI secure boot. User want to use Magnum to
> provisions a Magnum bay of Kubernetes from these 2 kind of baremetal
> machines and for the machines supporting secure boot, user wants to use UEFI
> secure boot to boot them up. And 2 Kubernetes label(secure-booted and
> non-secure-booted) are created and User can deploy their
> data-senstive/cirtical workload/containers/pods on the baremetal machines
> which are secure-booted.
>
>
>
> This requirement requires Magnum to supports 2 Nova flavors(one is
> “extra_spec: secure_boot=True” and the other doesn’t specify it) based on
> the Ironic
> feature(https://specs.openstack.org/openstack/ironic-specs/specs/kilo-implemented/uefi-secure-boot.html
> ).
>
>
>
> Could you kindly give me some comments on these requirement or whether it is
> reasonable from your point? If you agree, we can write design spec and
> implement this feature?
>
>
>
> I think the requirement is reasonable, but I would like to solve the problem
> in a generic way. In particular, there could be another user who might ask
> for N nova flavors to provision COE nodes in the future. A challenge to
> support N groups of Nova instances is how to express arbitrary number of
> resource groups (with different flavors) in a Heat template (Magnum uses
> Heat template to provision COE clusters). Heat doesn’t seem to support the
> logic of looping from 1 to N. There could be other challenges/complexities
> along the way. If the proposed design can address all the challenges and the
> implementation is clean, I am OK to add support for this feature. Thoughts
> from others?

This looks similar to the way we looked at passing a list of
availability zones. Mathieu asked and got a good answer:
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088175.html

Something similar can probably be used to pass multiple flavors? Just
in case it helps.
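
For illustration, one way to sidestep the lack of looping (a rough sketch with
made-up flavor/image names, not Magnum code) is to generate the template
programmatically, emitting one OS::Heat::ResourceGroup per requested flavor:

    import yaml


    def build_minion_template(flavors, counts, image='fedora-atomic'):
        """Build a HOT template dict with one ResourceGroup per flavor."""
        template = {
            'heat_template_version': '2015-04-30',
            'resources': {},
        }
        for index, (flavor, count) in enumerate(zip(flavors, counts)):
            template['resources']['minion_group_%d' % index] = {
                'type': 'OS::Heat::ResourceGroup',
                'properties': {
                    'count': count,
                    'resource_def': {
                        'type': 'OS::Nova::Server',
                        'properties': {'flavor': flavor, 'image': image},
                    },
                },
            }
        return template


    print(yaml.safe_dump(
        build_minion_template(['baremetal.secure', 'baremetal.legacy'], [2, 3]),
        default_flow_style=False))

If the approach from the availability-zone thread carries over, a single
ResourceGroup fed a comma_delimited_list parameter and indexed with the group's
%index% variable might achieve the same result without generating templates.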

Cheers,
  Ricardo

>
>
>
> Regards,
>
> Gary
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum]Cache docker images

2016-04-20 Thread Ricardo Rocha
Hi.

On Wed, Apr 20, 2016 at 5:43 PM, Fox, Kevin M  wrote:
> If the ops are deploying a cloud big enough to run into that problem, I
> think they can deploy a scaled out docker registry of some kind too, that
> the images can point to? Last I looked, it didn't seem very difficult. The
> native docker registry has ceph support now, so if your running ceph for the
> backend, you can put an instance on each controller and have it stateless I
> think.

This is what we did, using registry v2. There's an issue to pull from
a v2 registry anonymously:
https://github.com/docker/docker/issues/17317

but we've set up a dummy account to do it. Both this account and any
required CA certs can be configured by the operator, which was the
reason we proposed this (we patch the templates for now):
https://blueprints.launchpad.net/magnum/+spec/allow-user-softwareconfig

Allowing an optional prefix to pull from a local registry sounds reasonable.

Cheers,
  Ricardo

>
> Either way you would be hammering some storage service. Either glance or
> docker registry.
>
> Thanks,
> Kevin
> 
> From: Guz Egor [guz_e...@yahoo.com]
> Sent: Tuesday, April 19, 2016 7:20 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Fox, Kevin M
> Subject: Re: [openstack-dev] [Magnum]Cache docker images
>
> Kevin,
>
> I agree this is not ideal solution, but it's probably the best option to
> deal with public cloud "stability" (e.g. we switched to the same model at
> AWS and
> got really good boost in provisioning time and reduce # failures during
> cluster provisioning). And if application need guarantee "fresh" image, it
> uses
> force pull option in Marathon.
>
> ---
> Egor
>
> 
> From: "Fox, Kevin M" 
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Sent: Tuesday, April 19, 2016 1:04 PM
>
> Subject: Re: [openstack-dev] [Magnum]Cache docker images
>
> I'm kind of uncomfortable as an op with the prebundled stuff. how do you
> upgrade things when needed if there is no way to pull updated images from a
> central place?
>
> Thanks,
> Kevin
> 
> From: Hongbin Lu [hongbin...@huawei.com]
> Sent: Tuesday, April 19, 2016 11:56 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Magnum]Cache docker images
>
> Eli,
>
> The approach of pre-pulling docker images has a problem. It only works for
> specific docker storage driver. In comparison, the tar file approach is
> portable across different storage drivers.
>
> Best regards,
> Hongbin
>
> From: taget [mailto:qiaoliy...@gmail.com]
> Sent: April-19-16 4:26 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Magnum]Cache docker images
>
> hi hello again
>
> I believe you are talking about this bp
> https://blueprints.launchpad.net/magnum/+spec/cache-docker-images
> then ignore my previous reply, that may another topic to solve network
> limited problem.
>
> I think you are on the right way to build docker images but this image could
> only bootstrap by cloud-init, without cloud-init
> the container image tar file are not loaded at all, but seems this may not
> be the best way.
>
> I'v suggest that may be the best way is we pull docker images while building
> atomic-image. Per my understanding, the
> image build process is we mount the image to read/write mode to some tmp
> directory and chroot to to that dircetory,
> we can do some custome operation there.
>
> I can do a try on the build progress(guess rpm-ostree should support some
> hook scripts)
>
> On 2016年04月19日 11:41, Eli Qiao wrote:
>
> @wanghua
>
> I think there were some discussion already , check
> https://blueprints.launchpad.net/magnum/+spec/support-private-registry
> and https://blueprints.launchpad.net/magnum/+spec/allow-user-softwareconfig
> On 2016年04月19日 10:57, 王华 wrote:
>
> Hi all,
>
> We want to eliminate pulling docker images over the Internet on bay
> provisioning. There are two problems of this approach:
> 1. Pulling docker images over the Internet is slow and fragile.
> 2. Some clouds don't have external Internet access.
>
> It is suggested to build all the required images into the cloud images to
> resolved the issue.
>
> Here is a solution:
> We export the docker images as tar files, and put the tar files into a dir
> in the image when we build the image. And we add scripts to load the tar
> files in cloud-init, so that we don't need to download the docker images.
>
> Any advice for this solution or any better solution?
>
> Regards,
> Wanghua
>
>
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
>
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
>
> Best 

Re: [openstack-dev] [Magnum]Cache docker images

2016-04-20 Thread Hongbin Lu


From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: April-20-16 9:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum]Cache docker images

Hongbin,

Both of the approaches you suggested may only work for one binary format. If you 
try to use docker on a different system architecture, the pre-cache of images 
makes it even more difficult to get the correct images built and loaded.

I assume there are ways to detect the system architecture and kernel 
information when we are using diskimage-builder to build the image? If yes, we 
can catch a mismatch of system architecture and/or other kernel compatibility 
issues at an early stage.

I suggest we take an approach that allows the Baymodel creator to specify a 
docker registry and/or prefix that will determine where docker images are 
pulled from if they are not found in the local cache. That would give cloud 
operators the option to set up such a registry locally and populate it with the 
right images. This approach would also make it easier to customize the Magnum 
setup by tweaking the container images prior to use.

Works for me.
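
As a trivial illustration of the prefix idea (names are assumptions, not Magnum
code), the image references baked into the templates could be qualified with an
operator-supplied registry before pulling:

    def qualify_image(image, registry_prefix=None):
        """Prepend an operator-configured registry prefix to an image name,
        falling back to the upstream name when no prefix is set."""
        if registry_prefix:
            return '%s/%s' % (registry_prefix.rstrip('/'), image)
        return image

    # e.g. qualify_image('kubernetes/pause', 'registry.example.com:5000')
    # returns 'registry.example.com:5000/kubernetes/pause'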


Thanks,

Adrian

On Apr 19, 2016, at 11:58 AM, Hongbin Lu 
> wrote:
Eli,

The approach of pre-pulling docker images has a problem. It only works for 
specific docker storage driver. In comparison, the tar file approach is 
portable across different storage drivers.

Best regards,
Hongbin

From: taget [mailto:qiaoliy...@gmail.com]
Sent: April-19-16 4:26 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum]Cache docker images

hi, hello again

I believe you are talking about this bp: 
https://blueprints.launchpad.net/magnum/+spec/cache-docker-images
so ignore my previous reply; that addresses another topic (working around 
limited network access).

I think you are on the right track building docker images, but such an image 
can only be bootstrapped by cloud-init; without cloud-init the container image 
tar files are not loaded at all, and this may not be the best way.

I'd suggest the best way may be to pull the docker images while building the 
atomic image. Per my understanding, the image build process mounts the image 
read/write in a temporary directory and chroots into that directory, where we 
can do some custom operations.

I can give the build process a try (I guess rpm-ostree should support some hook 
scripts).


On 2016年04月19日 11:41, Eli Qiao wrote:
@wanghua

I think there were some discussion already , check 
https://blueprints.launchpad.net/magnum/+spec/support-private-registry
and https://blueprints.launchpad.net/magnum/+spec/allow-user-softwareconfig
On 2016年04月19日 10:57, 王华 wrote:
Hi all,

We want to eliminate pulling docker images over the Internet on bay 
provisioning. There are two problems with this approach:
1. Pulling docker images over the Internet is slow and fragile.
2. Some clouds don't have external Internet access.

It is suggested to build all the required images into the cloud images to 
resolve the issue.

Here is a solution:
We export the docker images as tar files, and put the tar files into a dir in 
the image when we build the image. And we add scripts to load the tar files in 
cloud-init, so that we don't need to download the docker images.

Any advice for this solution or any better solution?

Regards,
Wanghua





__

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Best Regards, Eli Qiao (乔立勇)

Intel OTC China





__

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Best Regards, Eli Qiao (乔立勇)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-20 Thread Hongbin Lu


> -Original Message-
> From: Ian Cordasco [mailto:sigmaviru...@gmail.com]
> Sent: April-20-16 1:56 PM
> To: Adrian Otto; OpenStack Development Mailing List (not for usage
> questions)
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
> 
> -Original Message-
> From: Adrian Otto 
> Reply: OpenStack Development Mailing List (not for usage questions)
> 
> Date: April 19, 2016 at 19:11:07
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject:  Re: [openstack-dev] [magnum][app-catalog][all] Build unified
>   abstraction for all COEs
> 
> > This pursuit is a trap. Magnum should focus on making native
> container APIs available.
> > We should not wrap APIs with leaky abstractions. The lowest common
> > denominator of all COEs is an remarkably low value API that adds
> > considerable complexity to Magnum that will not strategically advance
> > OpenStack. If we instead focus our effort on making the COEs work
> > better on OpenStack, that would be a winning strategy. Support and
> compliment our various COE ecosystems.
> 
> I'm not nearly as familiar with each COE's API as I'm sure some of you
> are, but knowing the present quality of Magnum's documentation, the
> fact that it's API is not being documented well, and how thinly
> stretched most of Magnum's developers already seem to be, I think that
> not doing this now (or in the near term is a better option). First, all
> of the COEs magnum supports have excellent documentation around their
> API and clients. Magnum does not have that at present and would need to
> work on that for this effort to be worthwhile to Magnum's users. Second
> of all, what little I do know about each COE's API reinforces (in my
> opinion) what Adrian has stated above. Finally, it seems like there are
> too many focuses for development at the moment (between trying to
> improve gating to allow for multiple supported distributions by default,
> eliminating the hard reliance on barbican, creating a versioned and
> stable API, and other efforts) for the API design to be done well and
> documented well. Frankly, I think the magnum team should be focusing on
> 1 thing as their top priority right now and have a secondary priority.
> What those priorities are, is up to the community, but I don't think
> this should be one of those priorities as someone watching the
> community to evaluate it's direction and the potential future of Magnum
> inside a product.

I think we are debating the long-term direction of the project (not the 
short-term priorities of individual tasks). The decision will have an impact on 
the scope of Magnum (to be a COE deployment tool or a container lifecycle 
management service). Maybe your suggestion is not to make such a decision in the 
short term? If yes, could you elaborate?

> 
> --
> Ian Cordasco
> 
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision minion nodes

2016-04-20 Thread Fox, Kevin M
I'll go ahead and be the guy to ask for N flavors. :)

AZs are kind of restrictive in what they can do, so we usually use 
flavors, which are much more flexible.

I can totally see a project with 3 different types of flavors and wanting them all 
in the same k8s cluster, managed by labels.

Thanks,
Kevin


From: Hongbin Lu [hongbin...@huawei.com]
Sent: Wednesday, April 20, 2016 11:13 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to 
provision minion nodes



From: Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) [mailto:li-gong.d...@hpe.com]
Sent: April-20-16 3:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision 
minion nodes

Hi Folks,

We are considering whether Magnum can supports 2 Nova flavors to provision 
Kubernetes and other COE minion nodes.
This requirement comes from the below use cases:

-  There are 2 kinds of baremetal machines at the customer site: legacy 
machines which don’t support UEFI secure boot, and new machines which do. 
The user wants to use Magnum to provision a Magnum bay of Kubernetes from these 
2 kinds of baremetal machines and, for the machines supporting secure boot, 
to use UEFI secure boot to boot them up. Two Kubernetes labels (secure-booted 
and non-secure-booted) are created, and the user can deploy their 
data-sensitive/critical workloads/containers/pods on the baremetal machines 
which are secure-booted.

This requires Magnum to support 2 Nova flavors (one with “extra_spec: 
secure_boot=True” and the other without it) based on the Ironic 
feature(https://specs.openstack.org/openstack/ironic-specs/specs/kilo-implemented/uefi-secure-boot.html
 ).

Could you kindly give me your comments on this requirement and whether it is 
reasonable from your point of view? If you agree, we can write a design spec and 
implement this feature.

I think the requirement is reasonable, but I would like to solve the problem in 
a generic way. In particular, there could be another user who might ask for N 
nova flavors to provision COE nodes in the future. A challenge to support N 
groups of Nova instances is how to express an arbitrary number of resource groups 
(with different flavors) in a Heat template (Magnum uses Heat templates to 
provision COE clusters). Heat doesn’t seem to support the logic of looping from 
1 to N. There could be other challenges/complexities along the way. If the 
proposed design can address all the challenges and the implementation is clean, 
I am OK to add support for this feature. Thoughts from others?

Regards,
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic-staging-drivers] Tests at the gates

2016-04-20 Thread Andreas Jaeger
On 04/20/2016 04:57 PM, Vasyl Saienko wrote:
> Hello Ironic-staging-drivers team,
> 
> At the moment there is no tests for ironic-staging-drivers at the gates.
> I think we need to have a simple test that install drivers with theirs
> dependencies and ensures that ironic-conductor is able to start.
> It may be performed in the following way. Each staging driver contain
> two files:
> 
>   * python-requirements.txt - file for python libraries
>   * other-requirements.sh - script that will install all non-python

The file other-requirements.txt is already one way to install additional
packages, just use that one. Best ask on #openstack-infra for details,

Andreas

> driver requirements.
> 
> During devstack installation phase for each driver we launch:
> 
>   * pip install -r
> 
> ironic-staging-drivers/ironic-staging-drivers/$driver/python-requirements.txt
>   * bash
> 
> ironic-staging-drivers/ironic-staging-drivers/$driver/other-requirements.sh
>   * add drivers to enabled_driver list
> 
> At the end ironic will try to register a node with some *_ssh driver. So
> if it succeed it means that conductor with staging drivers has started
> successfully.
> 
> The devstack plugin is on review already:
> https://review.openstack.org/#/c/299229/
> 
> Sincerely,
> Vasyl Saienko
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Stepping down from puppet-openstack-core

2016-04-20 Thread Cody Herriges
> On Apr 18, 2016, at 8:37 AM, Sebastien Badia  wrote:
> 
> Hello here,
> 
> I would like to ask to be removed from the core reviewers team on the
> Puppet for OpenStack project.
> 
> I lack dedicated time to contribute on my spare time to the project. And I
> don't work anymore on OpenStack deployments.
> 
> In the past months, I stopped reviewing and submitting changes on our project,
> that's why I slopes down gradually into the abyss stats of the group :-)
> Community coc¹ suggests I step down considerately.
> 
> I've never been very talkative, but retrospectively it was a great adventure, 
> I
> learned a lot at your side. I'm very proud to see where the project is now.
> 
> So Long, and Thanks for All the Fish
> I whish you the best ♥
> 

Thank you.  Your help was much appreciated as I was going through the process 
of re-integrating and trying to ramp up my involvement in the community.


--
Cody




signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision minion nodes

2016-04-20 Thread Hongbin Lu


From: Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) [mailto:li-gong.d...@hpe.com]
Sent: April-20-16 3:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision 
minion nodes

Hi Folks,

We are considering whether Magnum can supports 2 Nova flavors to provision 
Kubernetes and other COE minion nodes.
This requirement comes from the below use cases:

-  There are 2 kinds of baremetal machines at the customer site: legacy 
machines which don't support UEFI secure boot, and new machines which do. 
The user wants to use Magnum to provision a Magnum bay of Kubernetes from these 
2 kinds of baremetal machines and, for the machines supporting secure boot, 
to use UEFI secure boot to boot them up. Two Kubernetes labels (secure-booted 
and non-secure-booted) are created, and the user can deploy their 
data-sensitive/critical workloads/containers/pods on the baremetal machines 
which are secure-booted.

This requires Magnum to support 2 Nova flavors (one with "extra_spec: 
secure_boot=True" and the other without it) based on the Ironic 
feature(https://specs.openstack.org/openstack/ironic-specs/specs/kilo-implemented/uefi-secure-boot.html
 ).

Could you kindly give me your comments on this requirement and whether it is 
reasonable from your point of view? If you agree, we can write a design spec and 
implement this feature.

I think the requirement is reasonable, but I would like to solve the problem in 
a generic way. In particular, there could be another user who might ask for N 
nova flavors to provision COE nodes in the future. A challenge to support N 
groups of Nova instances is how to express arbitrary number of resource groups 
(with different flavors) in a Heat template (Magnum uses Heat template to 
provision COE clusters). Heat doesn't seem to support the logic of looping from 
1 to N. There could be other challenges/complexities along the way. If the 
proposed design can address all the challenges and the implementation is clean, 
I am OK to add support for this feature. Thoughts from others?

Regards,
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat] Summit session clashes

2016-04-20 Thread Jay Dobies

[snip]


I need to be at both of those Heat ones anyway, so this doesn't really
help me. I'd rather have the DLM session in this slot instead. (The only
sessions I can really skip are the Release Model, Functional Tests and
DLM.) That would give us:

           Heat                     TripleO

  Wed 3:30 Release Model
  Wed 4:30 HOT Parser
  Wed 5:20 Functional Tests

  Thu 1:30 DLM                      Upgrades
  Thu 2:20 Convergence switchover   Containers
  Thu 3:10 Convergence cleanup      Composable Roles
  Thu 4:10 Performance              API
  Thu 5:00 Validation               CI


+1 from me, this will let me bounce between the two as well.


I think that way Steve and I could probably both cover upgrades, and he
could cover the rest.

I'd like to get to the composable roles and containers sessions too, but
we'd have to rejig basically every Heat session and I think it's too
late to be doing that.

cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][quality] hacking 0.11.0 release

2016-04-20 Thread no-reply
We are content to announce the release of:

hacking 0.11.0: OpenStack Hacking Guideline Enforcement

With package available at:

https://pypi.python.org/pypi/hacking

For more details, please see below.

Changes in hacking 0.10.1..0.11.0
-

c3b03a9 Updated from global requirements
4cffaaf Update the assertRaises example to be more modern
c11724e Updated from global requirements
b011ebb Run all flake8 checks
b8e3ce2 Updated from global requirements
44718ab Updated from global requirements
79ce68e Update flake8 requirement for 0.11.0
1bfbe65 Updated from global requirements
77f9bdf Avoid git clone in integration test
20b96f9 Remove oslo-incubator section in HACKING.rst
7f496e2 Do not call "rm -rf" on arbitrary user input
42ca927 Update setup.cfg
a73d9c2 Put py34 first in the env order of tox
3ab47b3 Minor text cleanup
fc098c4 Deprecated tox -downloadcache option removed
f56de8f Updated from global requirements
87c4c03 Update docs for i18n exception
2af1f6b Updated from global requirements
3791fd6 Updated from global requirements
394d7eb Updated from global requirements
d53a948 Updated from global requirements
966e7b4 Add editor files to .gitignore.
a9a5e1d Take advantage of RST formatting for links
9c20ad8 Revise adding additional checks section of README
3a5f20f Add example to versioning section of README
6db5a22 Expand origin section of README
0c4aafb Document the reasoning behind general hacking rules
93d7f7f Expand depends documentation in readme
12938a1 Fix a typo
b65962a Add support for flake8 off_by_default for optional checks
0364969 Updated from global requirements
f5349f8 whitelist bash in integration test
6c14e1c Drop py33 as a target
3f3f2b4 Don't shallow clone from file:// in integration test
77e1ba8 Updated from global requirements
bedf520 Updated from global requirements
b91ff68 Change integration test to use  git.o.o
4fce727 tests: allow to run the same test for different code
4e09468 Updated from global requirements
3e3322d Fix string formatting in example
e73da20 Updated from global requirements
8d5dedc Updated from global requirements
789aba5 H105: also check for Authors and authors

Diffstat (except docs and test files)
-

.gitignore|  3 ++
HACKING.rst   | 58 ---
README.rst| 52 +--
hacking/checks/comments.py|  6 ++--
hacking/checks/vim_check.py   | 39 +++
hacking/core.py   | 15 +
integration-test/test.sh  | 48 ++---
requirements.txt  | 10 +++---
setup.cfg |  4 ++-
setup.py  |  3 +-
test-requirements.txt | 22 ++---
tox.ini   |  5 +--
14 files changed, 196 insertions(+), 84 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index f3f606a..7146fce 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -4 +4 @@
-pbr>=0.6,!=0.7,<1.0
+pbr>=1.6 # Apache-2.0
@@ -6,3 +6,3 @@ pbr>=0.6,!=0.7,<1.0
-pep8==1.5.7
-pyflakes==0.8.1
-flake8==2.2.4
+pep8==1.5.7 # MIT
+pyflakes==0.8.1 # MIT
+flake8<2.6.0,>=2.5.4 # MIT
@@ -11 +11 @@ mccabe==0.2.1 # MIT License
-six>=1.7.0
+six>=1.9.0 # MIT
diff --git a/test-requirements.txt b/test-requirements.txt
index 314af22..39fd85b 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -4,10 +4,10 @@
-coverage>=3.6
-discover
-fixtures>=0.3.14
-mock>=1.0
-python-subunit>=0.0.18
-sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
-oslosphinx>=2.2.0  # Apache-2.0
-testrepository>=0.0.18
-testscenarios>=0.4
-testtools>=0.9.36,!=1.2.0
+coverage>=3.6 # Apache-2.0
+discover # BSD
+fixtures<2.0,>=1.3.1 # Apache-2.0/BSD
+mock>=1.2 # BSD
+python-subunit>=0.0.18 # Apache-2.0/BSD
+sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
+oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
+testrepository>=0.0.18 # Apache-2.0/BSD
+testscenarios>=0.4 # Apache-2.0/BSD
+testtools>=1.4.0 # MIT
@@ -20 +20 @@ testtools>=0.9.36,!=1.2.0
-eventlet>=0.16.1
+eventlet!=0.18.3,>=0.18.2 # MIT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-20 Thread Ian Cordasco
-Original Message-
From: Adrian Otto 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: April 19, 2016 at 19:11:07
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [magnum][app-catalog][all] Build unified  
abstraction for all COEs

> This pursuit is a trap. Magnum should focus on making native container APIs 
> available.  
> We should not wrap APIs with leaky abstractions. The lowest common 
> denominator of all  
> COEs is an remarkably low value API that adds considerable complexity to 
> Magnum that  
> will not strategically advance OpenStack. If we instead focus our effort on 
> making the  
> COEs work better on OpenStack, that would be a winning strategy. Support and 
> compliment  
> our various COE ecosystems.

I'm not nearly as familiar with each COE's API as I'm sure some of you are, but 
knowing the present quality of Magnum's documentation, the fact that its API 
is not being documented well, and how thinly stretched most of Magnum's 
developers already seem to be, I think that not doing this now (or in the near 
term) is a better option. First, all of the COEs Magnum supports have excellent 
documentation around their API and clients. Magnum does not have that at 
present and would need to work on that for this effort to be worthwhile to 
Magnum's users. Second of all, what little I do know about each COE's API 
reinforces (in my opinion) what Adrian has stated above. Finally, it seems like 
there are too many focuses for development at the moment (between trying to 
improve gating to allow for multiple supported distributions by default, 
eliminating the hard reliance on barbican, creating a versioned and stable API, 
and other efforts) for the API design to be done well and documented well. 
Frankly, I think the magnum team should be focusing on 1 thing as their top 
priority right now and have a secondary priority. What those priorities are, is 
up to the community, but I don't think this should be one of those priorities 
as someone watching the community to evaluate its direction and the potential 
future of Magnum inside a product.

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]informal meetup during summit

2016-04-20 Thread Jay Dobies



On 4/20/16 1:00 PM, Rico Lin wrote:

Hi team
Let's plan for some more informal meetup (relax) time, so that all heaters and
folks from any other projects can have fun and a chance for technical
discussions together.

After discussing it in the meeting, we will have a pre-meetup meetup on Friday
morning for a cup of coffee or some food. I would like to ask if anyone
knows a nice place for this meetup? :)

We are also open to another chance for everyone to go out for a nice dinner
and beer. Right now it seems Monday or Friday night could be the best
candidates for this wonderful task; what does everyone think? :)


I really like both of these ideas. I haven't met most of you and it'll 
be good to see everyone in a non-Heat light.


I'm available both Monday and Friday nights. I haven't looked at the 
schedule for Monday night to see what else is planned, but that's my 
vote since I suspect people may be leaving on Friday night.





--
May The Force of OpenStack Be With You,

*/Rico Lin
Chief OpenStack Technologist, inwinSTACK
/*irc: ricolin




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] upgrade support between which versions of ironic?

2016-04-20 Thread Devananda van der Veen
On Wed, Apr 20, 2016 at 5:38 AM, Mathieu Mitchell 
wrote:

>
>
> On 2016-04-19 11:29 PM, Tan, Lin wrote:
>
>> I agree this is reasonable to support all these cases in “cold upgrades”
>> but in supports-rolling-upgrade (live upgrade in another word) case it is
>> different and complicated and not necessary,
>>
>> During rolling upgrade, we will have old/new services co-existed, and we
>> need to make services compatible which need some extra code work and this
>> is the main purpose of spec [1]. And as far as I can see, we are  not
>> allowed to skip over releases when rolling upgrading.  So my point is
>> support name release is enough.
>>
>> 1. Because even if we want to support major number release, admins have
>> to upgrade from 5.0 -> 6.0 then 6.0 -> 7.0 in Ruby’s case of 5.0.0, 5.1.0
>> == Mitaka, 5.2.0, 6.0.0, 6.1.0, 7.0.0, 7.1.0, 7.2.0 == Newton. And we might
>> have a higher release frequency in the future. So it’s too much work for
>> upgrade a service every six months.
>>
>> 2. As we usually rolling upgrade the whole cloud, not for ironic only.
>> For example, other projects will upgrade from Mitaka to Netwon, there is
>> not much sense to upgrade Ironic from 5.0 -> 6.0 only.
>>
>>
> As an operator, I disagree with that statement. We follow different
> upgrade paths for Ironic and Glance for example. My area of concern around
> Ironic is compatibility with Nova and Neutron. If we can prove via CI that
> Nova on an older version still works with Ironic on master and vice versa,
> we will successfully avoid having to do them in a lockstep.


I agree that we need to test this scenario and assert via CI that it is
possible to upgrade Nova and Ironic separately, and as we're adding Neutron
integration, we will need to assert the same thing there. Let's call this a
"cloud rolling upgrade". We'll want to run this test on changes in Nova as
well.

We can test that with the grenade partial job. I do not think we need to
test the upgrade sequence in both directions, though -- as we integrate
with more services, that would explode exponentially. Instead, we should
prescribe an order for the upgrades (and document it clearly) and then test
that ordering. For starters, I would upgrade Ironic first (our API needs to
remain backwards compatible) and then upgrade Nova. That said, I'm not sure
how adding Neutron, or eventually Cinder, will affect the upgrade sequence.

Just to be clear, this doesn't replace the need to assert a "service
rolling upgrade", eg. where different releases of the Ironic API and
Conductor services are run at the same time, but we could do that in a test
environment that is only running Ironic, and wouldn't need to trigger that
test on changes in Nova, for example.

--deva
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Freezer] No IRC meeting next week and summit details

2016-04-20 Thread Mathieu, Pierre-Arthur
Hello, 

There will be no IRC meeting next week due to the OpenStack summit.

Feel free to join us at one of the four Design summit session we will be 
holding:
 - Wed: 9:50 - 10:30: Backup your OpenStack infrastructure [1]
 - Wed: 11:00 - 11:40: Backup as a service [2]
 - Wed: 11:50 - 12:30: Disaster Recovery [3]
 - Fri: 9:00 - 12:30: Contributors meetup [4]

Etherpads will be available and updated for all these sessions should you want 
to participate without being present.

We will also deliver a presentation around : "Freezer: From Backup/Restore aaS 
to a fully DR solution for OpenStack" on Tuesday. [5]

We hope to meet you there !

[1] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9188
[2] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9387
[3] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9388
[4] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9436
[5] https://www.openstack.org/summit/austin-2016/summit-schedule/events/7211


Best Regards,
- Pierre












__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-operators] [glance] [Austin summit] Glance session for operator feedback

2016-04-20 Thread Nikhil Komawar
NOTE: this is an operator-focused session and has been tagged for ops for
it to appear in cross track!

On 4/20/16 1:39 PM, Nikhil Komawar wrote:
> Hi all,
>
> At the Austin summit, I've scheduled a Glance work session [1] for
> gathering input on Glance deployments and feedback surrounding the same.
> Also, I've taken the liberty to propose a few topics related to the same
> at the discussion etherpad [2]. These are general discussion items that
> you may want to already bring up however, it's not the agenda. We plan
> to have a round table, free form session around different flavors and
> scales of deployments and help guide the project in a way that may work
> best for us all.
>
> Pasting the topics here:
>
>   * What's the strategy used for Glance deployments?
>   o How many use public vs. private glance installations?
>   o How many use glance registry?
>   * Glance DB support
>   o What databases are currently being used?
>   o Who are using Galera?
>   * Nova/Ironic image cache usage, glance image cache, performance,
> stability/reliability, etc
>   * Deprecation of glance-registry service, any hurdles?
>
> Please note: this is email thread for informational purposes and
> feedback on the session scheduling only. If you'd like to discuss one or
> more of the items, feel free to create a separate email thread for the
> same. If not, we will plan to start threads after the summit when we
> have more feedback gathered.
>
> [1]
> https://www.openstack.org/summit/austin-2016/summit-schedule/events/9280?goback=1
> [2] https://etherpad.openstack.org/p/newton-glance-operator-feedback
>

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-operators] [glance] [Austin summit] Glance session for operator feedback

2016-04-20 Thread Nikhil Komawar
Hi all,

At the Austin summit, I've scheduled a Glance work session [1] for
gathering input on Glance deployments and feedback surrounding the same.
Also, I've taken the liberty to propose a few topics related to the same
at the discussion etherpad [2]. These are general discussion items that
you may want to already bring up; however, it's not the agenda. We plan
to have a round table, free form session around different flavors and
scales of deployments and help guide the project in a way that may work
best for us all.

Pasting the topics here:

  * What's the strategy used for Glance deployments?
  o How many use public vs. private glance installations?
  o How many use glance registry?
  * Glance DB support
  o What databases are currently being used?
  o Who are using Galera?
  * Nova/Ironic image cache usage, glance image cache, performance,
stability/reliability, etc
  * Deprecation of glance-registry service, any hurdles?

Please note: this email thread is for informational purposes and
feedback on the session scheduling only. If you'd like to discuss one or
more of the items, feel free to create a separate email thread for the
same. If not, we will plan to start threads after the summit when we
have more feedback gathered.

[1]
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9280?goback=1
[2] https://etherpad.openstack.org/p/newton-glance-operator-feedback

-- 

Thanks,
Nikhil Komawar
Newton PTL for OpenStack Glance


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] openstack client slowness / client-as-a-service

2016-04-20 Thread Dean Troyer
On Wed, Apr 20, 2016 at 9:43 AM, Doug Hellmann 
wrote:

> Cliff looks for commands on demand. If we modify it's command loader to
> support some "built in" commands, and then implement the commands in OSC
> that way, we can avoid scanning the real plugin system until we hit a
> command that isn't built in.
>

Maybe down the road once this becomes a bigger percentage of the startup
time; for now I do not (yet) believe the plugins are the problem that others
do.  See below...


> The last time I proposed that, though, someone (Dean?) pointed out that
> loading the plugins wasn't actually where OSC was spending its time. So,
> maybe we should profile the thing before proposing code changes.
>

It has been a while since I looked into this in detail; we made a couple of
changes then that helped, and since then the world kept moving and we're
behind again. Right now OSC is a mess WRT keystoneclient/keystoneauth and
their interactions with os-client-config.  We have lost sight of who is
supposed to be doing what here.  I know for a fact that there is
duplication in auth setup; we often make duplicate REST auth calls.

OSC 2.4.0 should be released today/tomorrow, following which we begin
merging the cleanup starting with the ksc/ksa bits.

Here are a couple of things to consider for those who want to investigate
further:
* OSC does not load _any_ project client (modulo the ksc bit mentioned
above) unless/until it is actually needed to make a REST call.
* Timing on a help command includes a complete scan of all entry points to
generate the list of commands
* The --timing option lists all REST calls that properly go through our
TimingSession object.  That should be all of them unless a library doesn't
use the session it is given (the ones used by the commands in the OSC repo
all do this correctly).
* Interactive mode can be useful to get timing on just the setup/teardown
process without actually running a command:

  time openstack
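
For anyone who wants to poke at this, a quick-and-dirty way to time just the
entry point scan, separate from interpreter startup and imports (the group
prefix below is an assumption about how the plugin groups are named):

    import time

    import pkg_resources


    def time_entry_point_scan(prefix='openstack.'):
        """Time how long enumerating OSC-ish entry points takes."""
        start = time.time()
        found = 0
        for dist in pkg_resources.working_set:
            for group, entry_points in dist.get_entry_map().items():
                if group.startswith(prefix):
                    found += len(entry_points)
        return found, time.time() - start


    count, elapsed = time_entry_point_scan()
    print('enumerated %d entry points in %.3f seconds' % (count, elapsed))

Note this only measures enumerating the entry points, not importing the plugin
modules behind them, which is a separate cost.
__________________________________________________________________________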
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] weekly meeting on apr 21st

2016-04-20 Thread Nikhil Komawar
Hi all,

Last week when I asked if we needed a meeting for this week, the poll
[1] resulted in "maybe". I currently do not see any 'specific' agenda
[2] items posted for this week's meeting. I am assuming everyone is busy
going into the summit and the updates can be shared then or the meeting
after.

If anyone wants to have the meeting, please add your items now. If none
exist by 23:59 UTC today, I will cancel the meeting.

[1]
http://eavesdrop.openstack.org/meetings/glance/2016/glance.2016-04-14-14.00.log.html#l-82
[2] https://etherpad.openstack.org/p/glance-team-meeting-agenda

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-20 Thread Jeremy Stanley
On 2016-04-19 11:30:38 -0500 (-0500), Ian Cordasco wrote:
[...]
> I've argued with different downstream distributors about their own
> judgment of what portions of the patch to apply in order to fix an
> issue with an assigned CVE. It took much longer than should have
> been necessary in at least one of those cases where it did affect
> OpenStack
[...]

I won't disagree that it's a double-edged sword, but on balance
having established, organized distros managing security backporting
for their packages helps in a lot more situations of lax upstream
security posture than it hinders responsive upstreams (probably
because there are a lot more of the former than the latter). At
least it's seemed to me that a majority of vulnerability
announcements posted on the oss-sec ML come from distro security
teams as compared to upstream security teams, though this also may
just be due to having a lot more low-popularity projects packaged in
major distros and written by small teams who don't have experience
handling vulnerability reports.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] snapshot tool

2016-04-20 Thread Dmitry Nikishov
Dmitry,

I mean, currently shotgun fetches services' configuration along with
astute.yaml. These files contain passwords, keys, and tokens. I believe these
should be sanitized. Or, better yet, there should be an option to sanitize
sensitive data from fetched files.


Aleksandr,

Currently Fuel has a service non-root account with passwordless sudo
enabled. This may change in the future (the passwordless part), however,
now I don't see an issue there.
Additionally, it is possible for users to configure sudo for the
user-facing account however they like.

Regarding having this tool use a non-root account, there are 2 items:
- execute commands, that require elevated privileges (the easy part -- user
has to be able to execute these commands with sudo and without password)
- copy files, that this user doesn't have read privileges for.

For the second item, there are 2 possible solutions:
1. Give the non-root user read privileges for these files.
Pros:
- More straightforward, generally acceptable way
Cons:
- Requires additional implementation to give permissions to the user
- (?) Not very extensible: to allow copying a new file, we'd have to first
add it to the tool's config, and somehow implement adding read permissions

2. Somehow allow to copy these files with sudo.
Pros:
- More simple implementation: we'll just need to make sure that the user
can do passwordless sudo
- Extensible: to add more files, it's enough to just specify them in the
tool's configuration.
Cons:
- Non-obvious, obscure way
- Relies on having to be able to do something like "sudo cat
/path/to/file", which is not much better that just giving the user read
privileges. In fact, the only difference between this and giving the user
the read rights is that it is possible to allow "sudo cat" for files, that
don't yet exist, whereas giving permissions requires that these files
already are on the filesystem.

What way do you think is more appropriate?


On Wed, Apr 20, 2016 at 5:28 AM, Aleksandr Dobdin 
wrote:

> Dmitry,
>
> You can create a non-root user account without root privileges but you
> need to add it to appropriate groups and configure sudo permissions (even
> though you add this user to root group, it will fail with iptables command
> for example) to get config files and launch requested commands.I suppose
> that it is possible to note this possibility in the documentation and
> provide a customer with detailed instructions on how to setup this user
> account.There are some logs that will also be missing from the snapshot
> with the message permission denied (only the root user has access to some
> files with 0600 mask)
> This user account could be specified into config.yaml (ssh -> opts option)
>
> Sincerely yours,
> Aleksandr Dobdin
> Senior Operations Engineer
> Mirantis
> ​Inc.​
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Dmitry Nikishov,
Deployment Engineer,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][sfc] A standards-compliant SFC API

2016-04-20 Thread Armando M.
On 20 April 2016 at 09:31, Duarte Cardoso, Igor <
igor.duarte.card...@intel.com> wrote:

> Dear OpenStack Community,
>
>
>
> We've been investigating options in/around OpenStack for supporting
> Service Function Chaining. The networking-sfc project has made significant
> progress in this space, and we see lots of value in what has been
> completed. However, when we looked at the related IETF specs on SFC we
> concluded that there would be value in further developing an SFC API and
> related classification functionality to enhance the alignment between the
> work in the OpenStack community with the standards work. We would like to
> propose the SFC part as a potential networking-sfc v2 API, but are open to
> other options too based on your feedback.
>
>
>
> I have submitted a spec to the neutron-specs repo [1], where you can check
> what our initial thoughts for this new API are, and provide your feedback
> or questions regarding the same.
>
>
>
> Your thoughts on this are deeply appreciated. We are looking forward to
> having further discussions with everyone interested in giving feedback or
> establishing collaborations during the OpenStack Summit in Austin.
>
>
>
> [1] https://review.openstack.org/#/c/308453
>

Thanks for reaching out.

The networking-sfc initiative so far has been pretty autonomous. The
project has its own launchpad project [1] and its own docs to document APIs
and proposals [2]. During the long journey that Neutron has been through,
we have been adjusting how to manage the project in order to strike a good
balance between development agility, product stability and community needs.
We're always looking forward to improving that balance and this means that
how we track certain initiatives may evolve in the future. For now, it's
probably best to target the mailing list with tag [networking-sfc] (in
addition to neutron), as well as the project noted below.

[1] https://launchpad.net/networking-sfc
[2] http://docs.openstack.org/developer/networking-sfc/


>
> Thank you,
>
> Igor & the Intel OpenStack networking team.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat]informal meetup during summit

2016-04-20 Thread Rico Lin
Hi team
Let's plan for some more informal meetup (relax) time, so that all heaters and
folks from any other projects can have fun and a chance for technical
discussions together.

After discussing it in the meeting, we will have a pre-meetup meetup on Friday
morning for a cup of coffee or some food. I would like to ask if anyone
knows a nice place for this meetup? :)

We are also open to another chance for everyone to go out for a nice dinner and
beer. Right now it seems Monday or Friday night could be the best candidates
for this wonderful task; what does everyone think? :)



-- 
May The Force of OpenStack Be With You,



*Rico LinChief OpenStack Technologist, inwinSTACK*irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO]: landing code faster

2016-04-20 Thread Adam Young

On 04/20/2016 11:44 AM, Dan Prince wrote:

We've had a run of really spotty CI in TripleO. This is making it
really hard to land patches if reviewers aren't online. Specifically we
seem to get better CI results when the queue is less full (nights and
weekends)... often when core reviewers aren't around.

One thing that would help is if core reviewers would +2 instead of +1'ing
patches. If you buy the approach of a gerrit review, the code looks
good, etc. then go on and +2 it. Don't wait for CI to pass before
coming back around to add your final stamp of approval. We all agree
that the tripleo-check jobs should be passing (or have passed once
collectively) before making any final +A to the patch.


Agreed. The rationale for +1 is only if you are a contributor to the 
patch, and someone else has made changes.  +1 indicates that you are 
happy with the other person's changes.




The case for a core reviewer to +1 a patch is rare I think. If you have
some comments to add but don't want to +2 it then perhaps add those
comments with a +0 (or -1 if you think it needs to be fixed). Sure there are
some edge cases where +1's are helpful. But if our goal is to land good
code faster I think it would be more helpful to go ahead and +2 and let
the CI results fall where they may.

Dan




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][sfc] A standards-compliant SFC API

2016-04-20 Thread Duarte Cardoso, Igor
Dear OpenStack Community,

We've been investigating options in/around OpenStack for supporting Service 
Function Chaining. The networking-sfc project has made significant progress in 
this space, and we see lots of value in what has been completed. However, when 
we looked at the related IETF specs on SFC we concluded that there would be 
value in further developing an SFC API and related classification functionality 
to enhance the alignment between the work in the OpenStack community and the 
standards work. We would like to propose the SFC part as a potential 
networking-sfc v2 API, but are open to other options too based on your feedback.

I have submitted a spec to the neutron-specs repo [1], where you can check what 
our initial thoughts for this new API are, and provide your feedback or 
questions regarding the same.

Your thoughts on this are deeply appreciated. We are looking forward to having 
further discussions with everyone interested in giving feedback or establishing 
collaborations during the OpenStack Summit in Austin.

[1] https://review.openstack.org/#/c/308453

Thank you,
Igor & the Intel OpenStack networking team.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][release] neutron *-aas release notes are not linked.

2016-04-20 Thread Doug Hellmann
Excerpts from Akihiro Motoki's message of 2016-04-20 13:20:40 +0900:
> Hi,
> 
> I noticed Mitaka release notes for neutron *-aas [1,2,3] are not
> referred to from anywhere.
> Neutron has four deliverables (neutron, lbaas, fwaas, vpnaas),
> but only the release note of the main neutron repo is linked.
> 
> Is the right solution to add links to the main neutron release notes?
> Another possible way is to allow multiple release note links in our
> deliverable YAML format,
> but the first one looks easier.
> 
> Thanks,
> Akihiro
> 
> [0] http://docs.openstack.org/releasenotes/neutron/
> [1] http://docs.openstack.org/releasenotes/neutron-lbaas/
> [2] http://docs.openstack.org/releasenotes/neutron-fwaas/
> [3] http://docs.openstack.org/releasenotes/neutron-vpnaas/
> 

This is an issue with the data model we have for the releases repo. I
only included support for one link, not thinking about projects like
Neutron with multi-part deliverables. We can fix this during Newton.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] summit tools

2016-04-20 Thread Neil Jerram
A couple of questions about our Austin-related planning tools...

- Can one's calendar at 
https://www.openstack.org/summit/austin-2016/summit-schedule/#day=2016-04-25 
be exported as .ics, or otherwise integrated into a wider calendaring 
system?

- Is the app working for anyone else?  All I get is 'Oops - there was an 
error performing this operation' and 'There was a problem loading summit 
information ...'  My phone is a Blackberry, which means I'm asking for 
trouble, but OTOH it has an Android runtime and does successfully run 
several other Android apps.

Thanks,
Neil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-20 Thread Fox, Kevin M
So, location of unified api is an interesting topic... Can things be arranged 
in such a way that the abstraction largely exists only on the client in its own 
project?

So, what about a change of ideas?

The three pain points in the abstraction are:
 * Authentication
 * Connectivity
 * Differences in COE features/api that need abstracting.

Authentication issues: All OpenStack services use the same creds. It's painful 
if you ever have to use another set. How does a client ask Magnum for creds for 
a COE if it doesn't have any? It's painful if it doesn't use keystone tokens. 
Three ways to solve this... 1, add keystone auth support to the COEs and have 
Magnum always configure it. 2, proxy the COE api through Magnum's api, and deal 
with authentication on the Magnum side using Keystone. 3, have some kind of way 
to provision users/creds in each of the COEs and have a way to fetch the creds 
easily via Magnum's api.

Connectivity issues: The cloud ops ensure Magnum/Keystone/etc's apis are 
available. The COE's api is potentially in a tenant or private network, where 
it can be difficult to know how to reach it properly. If you have two COEs in 
two different tenants both using 10.0.0.0/24 you may run into issues keeping 
them apart. So I can see two potential ways forward here. Again, proxying the 
native COE api requests through Magnum could solve the issue. The other would 
be mandating every COE have a floating ip, though I think there still are 
issues there with clouds with multiple external networks and connectivity (I 
have a cloud with multiple external networks, so I know they are real :)

So, what do folks think about a proxy? I think all three COE APIs are rest 
based? If so, they could be grafted into the Magnum api, something like 
/v1/coeproxy//.

A unified client could then use keystone auth and the Magnum api to talk to the 
COE, and then all the rest of the abstraction nasties are just features/API 
differences and totally on the client to figure out. This can be a non-Magnum 
project if it's too distasteful.
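
As a rough sketch of what the client side might look like under that proxy idea 
(the /v1/coeproxy path, hostname and bay UUID below are hypothetical, not an 
existing Magnum endpoint; a Kubernetes bay is assumed):

# get a keystone token with the usual OpenStack creds
TOKEN=$(openstack token issue -f value -c id)
BAY_UUID=11111111-2222-3333-4444-555555555555   # hypothetical bay UUID

# talk to the COE through the Magnum API instead of hitting the COE directly,
# so no COE-specific creds and no route into the tenant network are needed
curl -H "X-Auth-Token: $TOKEN" \
     "http://magnum.example.com:9511/v1/coeproxy/$BAY_UUID/api/v1/namespaces/default/pods"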

Thanks,
Kevin




From: Keith Bray [keith.b...@rackspace.com]
Sent: Tuesday, April 19, 2016 10:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

Sure… I can clarify with a few additional thoughts:

1) I wouldn’t recommend that it be required for the operator to offer
this API. Representing a view of both providing managed services for
private cloud customer-on-premise installations of upstream OpenStack and
as a business owner with responsibility to operate Magnum for internal
usage within my own employer, I would prefer not to have to operate and
service a unified abstraction API that obfuscates all the benefit of
choice of the native COEs, which is the choice being provided to the end
user who is specifically selecting one COE over another when they
instantiate a bay (unless they pick the “Default” operator choice).  Maybe
a unified abstraction API is a separate project?  OpenStack services get
complicated very quickly and try to do too much.  At a minimum, I would
recommend it be an optional API, not required, and any overhead of
database or other necessary service components should be minimized to not
impact operators who do not want to offer it because it negates the point
of COE choice.  My ideal state is it would be a separate project entirely.

2) I’d like for folks who want the lowest common denominator API to chime
in with why they want it, and whether they need it to be part of Magnum or
not. I don’t intend to argue with folks who want it… I assume their
reasons are justified, but I would want to find out why it needs to be
part of the Magnum API. Offering choice in COEs and then getting out of
the way (which I believe Magnum should do) is at odds with abstracting the
differentiation of the COE choice via a unified API.  If there aren’t good
arguments for the "why a unified API needs to be integrated in Magnum",
then have it be separate from a code perspective and not required for
running the Magnum service.  When we talk about APIs and whether a service
is supported by one vendor or another, it is generally easiest to think
about the entire API; The API is either supported in its entirety or the
service isn’t compatible with OpenStack.  If some folks believe a lowest
common denominator API should exist, but there aren’t compelling arguments
for why it must be a required part of the Magnum API then we should
probably consider them as separate projects.  At this point, I am not
compelled to be in favor of integrating a unified API in Magnum when doing
so is a fundamentally different direction than the route Magnum has been
headed down.  By offering choice of COE, and trying not to get in
the way of that, Magnum provides relevant choice of platform to a very
rapidly changing technology landscape.

Thank you for asking 

Re: [openstack-dev] [neutron][release] neutron *-aas release notes are not linked.

2016-04-20 Thread Armando M.
On 20 April 2016 at 00:39, Andreas Jaeger  wrote:

> On 2016-04-20 06:20, Akihiro Motoki wrote:
> > Hi,
> >
> > I noticed Mitaka release notes for neutron *-aas [1,2,3] are not
> > referred to from anywhere.
> > Neutron has four deliverables (neutron, lbaas, fwaas, vpnaas),
> > but only the release note of the main neutron repo is linked.
> >
> > Is the right solution to add links to the main neutron release notes?
> > Another possible way is to allow multiple release note links in our
> > deliverable YAML format,
> > but the first one looks easier.
>
> They should be linked IMHO from
> http://releases.openstack.org/mitaka/index.html - but indeed they are not,
>

Not quite sure what's the best way to handle this, but this is an
inconsistency I found with [1], where entries must be explicitly added to
the networking section of [2].

[1] http://docs.openstack.org/developer/neutron/
[2] http://docs.openstack.org/developer/openstack-projects.html


> Andreas
>
> > Thanks,
> > Akihiro
> >
> > [0] http://docs.openstack.org/releasenotes/neutron/
> > [1] http://docs.openstack.org/releasenotes/neutron-lbaas/
> > [2] http://docs.openstack.org/releasenotes/neutron-fwaas/
> > [3] http://docs.openstack.org/releasenotes/neutron-vpnaas/
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum]Cache docker images

2016-04-20 Thread Fox, Kevin M
If the ops are deploying a cloud big enough to run into that problem, I think 
they can deploy a scaled out docker registry of some kind too, that the images 
can point to? Last I looked, it didn't seem very difficult. The native docker 
registry has ceph support now, so if you're running ceph for the backend, you can 
put an instance on each controller and have it stateless I think.

Either way you would be hammering some storage service. Either glance or docker 
registry.

Thanks,
Kevin

From: Guz Egor [guz_e...@yahoo.com]
Sent: Tuesday, April 19, 2016 7:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Fox, Kevin M
Subject: Re: [openstack-dev] [Magnum]Cache docker images

Kevin,

I agree this is not an ideal solution, but it's probably the best option to deal 
with public cloud "stability" (e.g. we switched to the same model at AWS and
got a really good boost in provisioning time and reduced the number of failures during cluster 
provisioning). And if an application needs a guaranteed "fresh" image, it uses the
force pull option in Marathon.

---
Egor


From: "Fox, Kevin M" 
To: OpenStack Development Mailing List (not for usage questions) 

Sent: Tuesday, April 19, 2016 1:04 PM
Subject: Re: [openstack-dev] [Magnum]Cache docker images

I'm kind of uncomfortable as an op with the prebundled stuff. How do you 
upgrade things when needed if there is no way to pull updated images from a 
central place?

Thanks,
Kevin

From: Hongbin Lu [hongbin...@huawei.com]
Sent: Tuesday, April 19, 2016 11:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum]Cache docker images

Eli,

The approach of pre-pulling docker images has a problem. It only works for 
a specific docker storage driver. In comparison, the tar file approach is 
portable across different storage drivers.

Best regards,
Hongbin

From: taget [mailto:qiaoliy...@gmail.com]
Sent: April-19-16 4:26 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum]Cache docker images

Hi, hello again.

I believe you are talking about this bp:
https://blueprints.launchpad.net/magnum/+spec/cache-docker-images
If so, ignore my previous reply; that is another topic, about solving the
limited-network problem.

I think you are on the right track building the docker images into the cloud
image, but that image can only be bootstrapped by cloud-init; without cloud-init
the container image tar files are not loaded at all, so this may not be the
best way.

I'd suggest that the best way may be to pull the docker images while building the
atomic image. Per my understanding, the
image build process mounts the image in read/write mode on some tmp
directory and chroots into that directory,
so we can do some custom operations there.

I can give it a try in the build process (I guess rpm-ostree should support some hook
scripts).

On 2016年04月19日 11:41, Eli Qiao wrote:
@wanghua

I think there were some discussion already , check 
https://blueprints.launchpad.net/magnum/+spec/support-private-registry
and https://blueprints.launchpad.net/magnum/+spec/allow-user-softwareconfig
On 2016年04月19日 10:57, 王华 wrote:
Hi all,

We want to eliminate pulling docker images over the Internet on bay 
provisioning. There are two problems of this approach:
1. Pulling docker images over the Internet is slow and fragile.
2. Some clouds don't have external Internet access.

It is suggested to build all the required images into the cloud images to 
resolve the issue.

Here is a solution:
We export the docker images as tar files, and put the tar files into a dir in 
the image when we build the image. And we add scripts to load the tar files in 
cloud-init, so that we don't need to download the docker images.
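
A rough sketch of that tar-file approach, assuming a hypothetical
/opt/docker-images directory baked into the cloud image (the directory name and
image tags are only illustrative):

# at image build time: export the required images as tar files into the image
docker save -o /opt/docker-images/hyperkube.tar gcr.io/google_containers/hyperkube:v1.2.0
docker save -o /opt/docker-images/etcd.tar quay.io/coreos/etcd:v2.2.5

# at boot time, from a cloud-init script: load them into the local docker daemon
for f in /opt/docker-images/*.tar; do
    docker load -i "$f"
done

Since docker save/load works on exported image archives rather than on the
storage directory, this is what keeps the approach independent of the docker
storage driver.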

Any advice for this solution or any better solution?

Regards,
Wanghua




__

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Best Regards, Eli Qiao (乔立勇)

Intel OTC China




__

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Best Regards, Eli Qiao (乔立勇)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Swift] Erasure coding and geo replication

2016-04-20 Thread John Dickinson
There's no significant change with the global EC clusters story in the 2.7 
release. That's something we're discussing next week at the summit.

--John



On 19 Apr 2016, at 22:47, Mark Kirkwood wrote:

> Hi,
>
> Has the release of 2.7 significantly changed the assessment here?
>
> Thanks
>
> Mark
>
> On 15/02/16 23:29, Kota TSUYUZAKI wrote:
>> Hello Mark,
>>
>> AFAIK, there are a few reasons why erasure code + geo replication is still a
>> work in progress.
>>
 and expect to survive a region outage...

 With that I mind I did some experiments (Liberty swift) and it looks to me 
 like if you have:

 - num_data_frags < num_nodes in (smallest) region

 and:

 - num_parity_frags = num_data_frags


 then having a region fail does not result in service outage.
>>
>> Good point, but note that PyECLib v1.0.7 (pinned to Kilo/Liberty stable) 
>> still has a problem where it cannot decode the original data when all of the fed 
>> fragments are parity frags[1] (i.e. if you set
>> num_parity_frags = num_data_frags and only parity frags come into the 
>> proxy for a GET request, it will fail at decoding). The problem was already 
>> resolved in PyECLib/liberasurecode on the master
>> branch, and current swift master has the PyECLib>=1.0.7 dependency, so if 
>> you intend to use the newest Swift, it might not
>> be an issue.
>>
>> From the Swift perspective, I think we need more tests/discussion for geo 
>> replication around write/read affinity[2], which is the geo replication piece in 
>> Swift itself, and around performance.
>>
>> For the write/read affinity, we deliberately did not consider the affinity 
>> control, to simplify the implementation, until EC landed in Swift master[3], 
>> so I think it's time to work out how we can use the
>> affinity control with EC, but that's not done yet.
>>
>> From the performance perspective, in my experiments more parity fragments cause 
>> quite a lot of performance degradation[4]. To prevent the degradation, I am working 
>> on a spec which makes duplicated copies of the
>> data/parity fragments and spreads them out across geo regions.
>>
>> To summarize, we haven't done the work yet, but we welcome discussion and 
>> contributions for EC + geo replication anytime, IMO.
>>
>> Thanks,
>> Kota
>>
>> 1: 
>> https://bitbucket.org/tsg-/liberasurecode/commits/a01b1818c874a65d1d1fb8f11ea441e9d3e18771
>> 2: 
>> http://docs.openstack.org/developer/swift/admin_guide.html#geographically-distributed-clusters
>> 3: 
>> http://docs.openstack.org/developer/swift/overview_erasure_code.html#region-support
>> 4: 
>> https://specs.openstack.org/openstack/swift-specs/specs/in_progress/global_ec_cluster.html
>>
>>
>>
>> (2016/02/15 18:00), Mark Kirkwood wrote:
>>> After looking at:
>>>
>>> https://www.youtube.com/watch?v=9YHvYkcse-k
>>>
>>> I have a question (that follows on from Bruno's) about using erasure coding 
>>> with geo replication.
>>>
>>> Now the example given to show why you could/should not use erasure coding 
>>> with geo replication is somewhat flawed as it is immediately clear that you 
>>> cannot set:
>>>
>>> - num_data_frags > num_devices (or nodes) in a region
>>>
>>> and expect to survive a region outage...
>>>
>>> With that I mind I did some experiments (Liberty swift) and it looks to me 
>>> like if you have:
>>>
>>> - num_data_frags < num_nodes in (smallest) region
>>>
>>> and:
>>>
>>> - num_parity_frags = num_data_frags
>>>
>>>
>>> then having a region fail does not result in service outage.
>>>
>>> So my real question is - it looks like it *is* possible to use erasure 
>>> coding in geo replicated situations - however I may well be missing 
>>> something significant, so I'd love some clarification here [1]!
>>>
>>> Cheers
>>>
>>> Mark
>>>
>>> [1] Reduction in disk usage and net traffic looks attractive
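
For reference, a policy of the shape described above (fewer data fragments than
nodes in the smallest region, and as many parity fragments as data fragments)
would look roughly like this in swift.conf; the index, name and numbers are
purely illustrative, not a recommendation:

[storage-policy:2]
name = ec-geo-test
policy_type = erasure_coding
ec_type = liberasurecode_rs_vand
ec_num_data_fragments = 4
ec_num_parity_fragments = 4

Whether running such a layout across regions is actually a supported global EC
configuration is exactly the open question in this thread.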
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO]: landing code faster

2016-04-20 Thread Dan Prince
We've had a run of really spotty CI in TripleO. This is making it
really hard to land patches if reviewers aren't online. Specifically we
seem to get better CI results when the queue is less full (nights and
weekends)... often when core reviewers aren't around.

One thing that would help is if core reviewers would +2 instead of +1'ing
patches. If you buy the approach of a gerrit review, the code looks
good, etc. then go on and +2 it. Don't wait for CI to pass before
coming back around to add your final stamp of approval. We all agree
that the tripleo-check jobs should be passing (or have passed once
collectively) before making any final +A to the patch.

The case for a core reviewer to +1 a patch is rare I think. If you have
some comments to add but don't want to +2 it then perhaps add those
comments with a +0 (or -1 if you think it needs fixing). Sure there are
some edge cases where +1's are helpful. But if our goal is to land good
code faster I think it would be more helpful to go ahead and +2 and let
the CI results fall where they may.

Dan




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic-staging-drivers] Tests at the gates

2016-04-20 Thread Vasyl Saienko
Hello Ironic-staging-drivers team,

At the moment there are no tests for ironic-staging-drivers at the gates.
I think we need to have a simple test that installs the drivers with their
dependencies and ensures that ironic-conductor is able to start.
It may be performed in the following way. Each staging driver contains two
files:

   - python-requirements.txt - file for python libraries
   - other-requirements.sh - script that will install all non-python driver
   requirements.

During devstack installation phase for each driver we launch:

   - pip install -r
   ironic-staging-drivers/ironic-staging-drivers/$driver/python-requirements.txt
   - bash
   ironic-staging-drivers/ironic-staging-drivers/$driver/other-requirements.sh
   - add drivers to enabled_driver list

At the end ironic will try to register a node with some *_ssh driver, so if
it succeeds it means that the conductor with the staging drivers has started
successfully.
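
A minimal sketch of that per-driver install loop, assuming the repository layout
mentioned above (the paths and comments are illustrative, not the actual plugin
code under review):

# executed from the devstack plugin's install phase
for driver_dir in ironic-staging-drivers/ironic-staging-drivers/*/; do
    if [ -f "${driver_dir}python-requirements.txt" ]; then
        pip install -r "${driver_dir}python-requirements.txt"
    fi
    if [ -f "${driver_dir}other-requirements.sh" ]; then
        bash "${driver_dir}other-requirements.sh"
    fi
    # append the driver's names to enabled_drivers in ironic.conf here
done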

The devstack plugin is on review already:
https://review.openstack.org/#/c/299229/

Sincerely,
Vasyl Saienko
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] openstack client slowness / client-as-a-service

2016-04-20 Thread Doug Hellmann
Excerpts from Steve Baker's message of 2016-04-20 16:38:25 +1200:
> On 20/04/16 06:17, Monty Taylor wrote:
> > On 04/19/2016 10:16 AM, Daniel P. Berrange wrote:
> >> On Tue, Apr 19, 2016 at 09:57:56AM -0500, Dean Troyer wrote:
> >>> On Tue, Apr 19, 2016 at 9:06 AM, Adam Young  wrote:
> >>>
>  I wonder how much of that is Token caching.  In a typical CLI use 
>  patter,
>  a new token is created each time a client is called, with no 
>  passing of a
>  token between services.  Using a session can greatly decrease the 
>  number of
>  round trips to Keystone.
> 
> >>>
> >>> Not as much as you think (or hope?).  Persistent token caching to 
> >>> disk will
> >>> help some, at other expenses though.  Using --timing on OSC will 
> >>> show how
> >>> much time the Identity auth round trip cost.
> >>>
> >>> I don't have current numbers, the last time I instrumented OSC there 
> >>> were
> >>> significant load times for some modules, so we went a good distance to
> >>> lazy-load as much as possible.
> >>>
> >>> What Dan sees WRT a persistent client process, though, is a 
> >>> combination of
> >>> those two things: saving the Python loading and the Keystone round 
> >>> trips.
> >>
> >> The 1.5sec overhead I eliminated doesn't actually have anything to do
> >> with network round trips at all. Even if you turn off all network
> >> services and just run 'openstack ' and let it fail due
> >> to inability to connect it'll still have that 1.5 sec overhead. It
> >> is all related to python runtime loading and work done during module
> >> importing.
> >>
> >> eg run 'unstack.sh' and then compare the main openstack client:
> >>
> >> $ time /usr/bin/openstack server list
> >> Discovering versions from the identity service failed when creating 
> >> the password plugin. Attempting to determine version from URL.
> >> Unable to establish connection to 
> >> http://192.168.122.156:5000/v2.0/tokens
> >>
> >> real0m1.555s
> >> user0m1.407s
> >> sys0m0.147s
> >>
> >> Against my client-as-a-service version:
> >>
> >> $ time $HOME/bin/openstack server list
> >> [Errno 111] Connection refused
> >>
> >> real0m0.045s
> >> user0m0.029s
> >> sys0m0.016s
> >>
> >>
> >> I'm sure there is scope for also optimizing network traffic / round
> >> trips, but I didn't investigate that at all.
> >>
> >>> I have (had!) a version of DevStack that put OSC into a subprocess and
> >>> called it via pipes to do essentially what Dan suggests.  It saves some
> >>> time, at the expense of complexity that may or may not be worth the 
> >>> effort.
> >>
> >> devstack doesn't actually really need any significant changes beyond
> >> making sure $PATH pointed to the replacement client programs and that
> >> the server was running - the latter could be automated as a launch on
> >> demand thing which would limit devstack changes.
> >>
> >> It actually doesn't technically need any devstack change - these
> >> replacement clients could simply be put in some 3rd party git repo
> >> and let developers who want the speed benefit simply put them in
> >> their $PATH before running devstack.
> >>
> >>> One thing missing is any sort of transactional control in the I/O 
> >>> with the
> >>> subprocess, ie, an EOT marker.  I planned to add a -0 option (think 
> >>> xargs)
> >>> to handle that but it's still down a few slots on my priority list.  
> >>> Error
> >>> handling is another problem, and at this point (for DevStack purposes
> >>> anyway) I stopped the investigation, concluding that reliability 
> >>> trumped a
> >>> few seconds saved here.
> >>
> >> For I/O I simply replaced stdout + stderr with a new StringIO handle to
> >> capture the data when running each command, and for error handling I
> >> ensured the exit status was fed back & likewise stderr printed.
> >>
> >> It is more than just a few seconds saved - almost 4 minutes, or
> >> nearly 20% of entire time to run stack.sh on my machine
> >>
> >>
> >>> Ultimately, this is one of the two giant nails in the coffin of 
> >>> continuing
> >>> to persue CLIs in Python.  The other is co-installability. (See that
> >>> current thread on the ML for pain points).  Both are easily solved with
> >>> native-code-generating languages.  Go and Rust are at the top of my
> >>> personal list here...
> >
> > Using entrypoints and plugins in python is slow, so loading them is 
> > slow, as is loading all of the dependent libraries. Those were choices 
> > made for good reason back in the day, but I'm not convinced either are 
> > great anymore.
> >
> > A pluginless CLI that simply used REST calls rather than the 
> > python-clientlibs should be able to launch and get to the business of 
> > doing work in 0.2 seconds - counting time to load and parse 
> > clouds.yaml. That time could be reduced - the time spent in occ 
> > parsing vendor json files is not strictly necessary and certainly 
> > could go faster. It's not as fast as 0.004 seconds, but with 

[openstack-dev] Neutron: No DVR meeting for this week and week after.

2016-04-20 Thread Vasudevan, Swaminathan (PNB Roseville)
Hi Folks,
We will not be having our regular DVR meeting this week and for next week.

We will resume our meeting on May 4th 2016.

Thanks.

Swaminathan Vasudevan
Systems Software Engineer (TC)


HP Networking
Hewlett-Packard
8000 Foothills Blvd
M/S 5541
Roseville, CA - 95747
tel: 916.785.0937
fax: 916.785.1815
email: swaminathan.vasude...@hp.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][plugins] VIP addresses and network templates

2016-04-20 Thread Simon Pasquier
Many thanks Alexey! That's exactly the information I needed.
Simon

On Wed, Apr 20, 2016 at 1:19 PM, Aleksey Kasatkin 
wrote:

> Hi Simon,
>
> When network template is in use, network roles to endpoints mapping is
> specified in section "roles" (in the template). So, "default_mapping"
> from network role description is overridden in the network template.
> E.g.:
>
> network_assignments:
>     monitoring:
>         ep: br-mon
>     ...
>
> network_scheme:
>     custom:
>         roles:
>             influxdb_vip: br-mon
>             ...
>     ...
>
>
> I hope, this helps.
>
> Regards,
>
>
>
> Aleksey Kasatkin
>
>
> On Wed, Apr 20, 2016 at 12:16 PM, Simon Pasquier 
> wrote:
>
>> Hi,
>> I've got a question regarding network templates and VIP. Some of our
>> users want to run the StackLight services (eg Elasticsearch/Kibana and
>> InfluxDB/Grafana servers) on a dedicated network (let's call it
>> 'monitoring'). People use network templates [0] to provision this
>> additional network but how can Nailgun allocate the VIP address(es) from
>> this 'monitoring' network knowing that today the plugins specify the
>> 'management' network [1][2]?
>> Thanks for your help,
>> Simon
>> [0]
>> https://docs.mirantis.com/openstack/fuel/fuel-8.0/operations.html#using-networking-templates
>> [1]
>> https://github.com/openstack/fuel-plugin-influxdb-grafana/blob/8976c4869ea5ec464e5d19b387c1a7309bed33f4/network_roles.yaml#L4
>> [2]
>> https://github.com/openstack/fuel-plugin-elasticsearch-kibana/blob/25b79aff9a79d106fc74b33535952d28b0093afb/network_roles.yaml#L2
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Image import: operator plugins

2016-04-20 Thread Flavio Percoco

On 20/04/16 11:39 +0100, stuart.mcla...@hp.com wrote:

Hi,

As part of Glance's image import [1] work we need to define how operators
can run site specific code as part of the import process. For example
an operator may want to perform some site-specific validation on the
image bits.

Note that I'm not so much interested in what we use to do this (ie
we don't need to discuss the merits of taskflow/stevedore). Rather I'm
just interested in what level of abstraction we should provide.

Currently, the closest thing we have to a plugin [2] provides no
abstraction at all. You need to understand the Glance internals to use
it, and are not insulated from any future changes in Glance's internals.
You can do things like update the image state directly (possibly in a racy
or invalid way, leading to a poor experience for users) in your plugin.

To what degree are the following true?

1. An operator should not have to become intimately familiar with
internal Glance code in order to write a plugin.


I'd say the required knowledge of Glance internals should be kept to a
minimum.


2. The plugin should be as insulated as possible from future changes in
Glance's internals.


Since it allows for external plugins to be written, I'd agree that the public
API should respect all the backwards compatibility policies. Exposing just
what's really necessary would be better.


3. The plugin should have very limited access to what it can touch. It
should not be possible for plugins to cause race conditions by
setting the image state for example. (This is related to 1 and 2).


++

I'd like to note, though, that these races exist not because of the way the task
plugins were created but because the domain model allows for it. To be more
precise, these races exist in Glance too and not only in the tasks.


4. The interface should just require instantiating a class.  We shouldn't
require plugin writers to know implementation details eg how glance logs
(instead, provide a log method), or whether or not we use taskflow.


++


To give an example of an interface of this kind, we could just provide
an image data iterator to the plugin class. Obviously this is a really
incomplete example, but gives an idea of a simple interface which would
allow a plugin to read the image data and perform validation without
knowing/being exposed to Glance's internals.


Just to build on your example: it's very likely that consumers of this
interface will need to have information about the image's metadata as well.

Thanks for starting this thread,
Flavio


(Note: I'd like to keep the scope of this thread to what kind of
interface/contract a plugin should have, not what we use to implement
it, whether we should be doing this at all etc)

Thanks,

-Stuart

[1] https://review.openstack.org/#/c/232371
[2] https://review.openstack.org/#/c/214810/26/glance/async/flows/ovf_process.py

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-20 Thread Corey Bryant
On Tue, Apr 19, 2016 at 12:24 PM, Ian Cordasco 
wrote:

>
>
> -Original Message-
> From: Thomas Goirand 
> Reply: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Date: April 18, 2016 at 17:21:36
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Subject:  Re: [openstack-dev] [release][requirements][packaging][summit]
> input needed on summit discussion about global requirements
>
> > Hi Doug,
> >
> > I very much welcome opening such a thread before the discussion at the
> > summit, as often, sessions are too short. Taking the time to write
> > things down first also helps having a more constructive discussion.
> >
> > Before I reply to each individual message below, let me attempt to reply
> > to the big picture seen in your etherpad. I was tempted to insert
> > comments on each line of it, but I'm not sure how this would be
> > received, and probably it's best to attempt to reply more globally.
> >
> > From what I understand, the biggest problem you're trying to solve is
> > that managing the global-reqs is really time consuming from the release
> > team point of view, and especially its propagation to individual
> > projects. There's IMO many things that we could do to improve the
> > situation, which would be acceptable from the package maintainers point
> > of view.
> >
> > First of all, from what I could see in the etherpad, I see a lot of
> > release work which I consider not useful for anyone: not for downstream
> > distros, not upstream projects. Mostly, the propagation of the
> > global-requirements.txt to each and every individual Python library or
> > service *for OpenStack maintained libs* could be reviewed. Because 1/
> > distros will always package the highest version available in
> > upper-constraints.txt, and 2/ it doesn't really reflect a reality. As
> > you pointed out, project A may need a new feature from lib X, but
> > project B wont care. I strongly believe that we should leave lower
> > boundaries as a responsibility of individual projects. What's important,
> > though, is to make sure that the highest version released does work,
> > because that's what we will effectively package.
> >
> > What we can then later on do, at the distribution level, is artificially
> > set the lower bounds of versions to whatever we've just uploaded for a
> > given release of OpenStack. In fact, I've been doing this a lot already.
> > For example, I uploaded Eventlet 0.17.4, and then 0.18.4. There was
> > never anything in the between. Therefore, doing a dependency like:
> >
> > Depends: python-eventlet (>= 0.18.3)
> >
> > makes no sense, and I always pushed:
> >
> > Depends: python-eventlet (>= 0.18.4)
> >
> > as this reflects the reality of distros.
> >
> > If we generalize this concept, then I could push the minimum version of
> > all oslo libs into every single package for a given version of OpenStack.
> >
> > What is a lot more annoying though, is for packages which I do not
> > control directly, and which are used by many other non-OpenStack
> > packages inside the distribution. For example, Django, SQLAlchemy or
> > jQuery, to only name a few.
> >
> > I have absolutely no problem upping the lower bounds for all of
> > OpenStack components aggressively. We don't have gate jobs for the lower
> > bounds of our requirements. If we decide that it becomes the norm, I can
> > generalize and push for doing this even more. For example, after pushing
> > the update of an oslo lib B version X, I could push such requirements
> > everywhere, which in fact, would be a good thing (as this would trigger
> > rebuilds and recheck of all unit tests). Though, all of this would
> > benefit from a lot of automation and checks.
> >
> > On your etherpad, you wrote:
> >
> > "During the lead-up to preparing the final releases, one of the tracking
> > tasks we have is to ensure all projects have synced their global
> > requirements updates. This is another area where we could reduce the
> > burden on the release team."
> >
> > Well, don't bother, this doesn't reflect a reality anyway (ie: maybe
> > service X can use an older version of oslo.utils), so that's not really
> > helpful in any way.
> >
> > You also wrote:
> >
> > "Current ranges in global-requirements are large but most projects do
> > not actively test the oldest supported version (or other versions in
> > between) meaning that the requirement might result in broken packages."
> >
> > Yeah, that's true; I've seen this and reported a few bugs (the last I
> > have in memory is Neutron requiring SQLA >= 1.0.12). Though that's still
> > very useful hints for package maintainers *for 3rd party libs* (as I
> > wrote, it's less important for OpenStack components). We have a few
> > breakages here and there, but they are hopefully fixed.
> >
> > Though having a single version that projects are allowed to test 

Re: [openstack-dev] [neutron] [nova] scheduling bandwidth resources / NIC_BW_KB resource class

2016-04-20 Thread Irena Berezovsky
On Wed, Apr 20, 2016 at 4:25 PM, Miguel Angel Ajo Pelayo <
majop...@redhat.com> wrote:

> Inline update.
>
> On Mon, Apr 11, 2016 at 4:22 PM, Miguel Angel Ajo Pelayo
>  wrote:
> > On Mon, Apr 11, 2016 at 1:46 PM, Jay Pipes  wrote:
> >> On 04/08/2016 09:17 AM, Miguel Angel Ajo Pelayo wrote:
> [...]
> >> Yes, Nova's conductor gathers information about the requested networks
> >> *before* asking the scheduler where to place hosts:
> >>
> >>
> https://github.com/openstack/nova/blob/stable/mitaka/nova/conductor/manager.py#L362
> >>
> >>>  That would require identifying that the port has a "qos_policy_id"
> >>> attached to it, and then, asking neutron for the specific QoS policy
> >>>   [3], then look out for a minimum bandwidth rule (still to be
> defined),
> >>> and extract the required bandwidth from it.
> >>
> >>
> >> Yep, exactly correct.
> >>
> >>> That moves, again some of the responsibility to examine and
> >>> understand external resources to nova.
> >>
> >>
> >> Yep, it does. The alternative is more retries for placement decisions
> >> because accurate decisions cannot be made until the compute node is
> already
> >> selected and the claim happens on the compute node.
> >>
> >>>  Could it make sense to make that part pluggable via stevedore?, so
> >>> we would provide something that takes the "resource id" (for a port in
> >>> this case) and returns the requirements translated to resource classes
> >>> (NIC_BW_KB in this case).
> >>
> >>
> >> Not sure Stevedore makes sense in this context. Really, we want *less*
> >> extensibility and *more* consistency. So, I would envision rather a
> system
> >> where Nova would call to Neutron before scheduling when it has received
> a
> >> port or network ID in the boot request and ask Neutron whether the port
> or
> >> network has any resource constraints on it. Neutron would return a
> >> standardized response containing each resource class and the amount
> >> requested in a dictionary (or better yet, an os_vif.objects.* object,
> >> serialized). Something like:
> >>
> >> {
> >>   'resources': {
> >> '': {
> >>   'NIC_BW_KB': 2048,
> >>   'IPV4_ADDRESS': 1
> >> }
> >>   }
> >> }
> >>
> >
> > Oh, true, that's a great idea, having some API that translates a
> > neutron resource to scheduling constraints. The external call will
> > still be required, but the coupling issue is removed.
> >
> >
>
>
> I had a talk yesterday with @iharchys, @dansmith, and @sbauzas about
> this, and we believe the synthesis of resource usage / scheduling
> constraints from neutron makes sense.
>
> We should probably look into providing those details in a read only
> dictionary during port creation/update/show in general, in that way,
> we would not be adding an extra API call to neutron from the nova
> scheduler to figure out any of those details. That extra optimization
> is something we may need to discuss with the neutron community.
>
What about the caller context?
I believe these details should be visible to admin users only.

>
>

> >> In the case of the NIC_BW_KB resource class, Nova's scheduler would
> look for
> >> compute nodes that had a NIC with that amount of bandwidth still
> available.
> >> In the case of the IPV4_ADDRESS resource class, Nova's scheduler would
> use
> >> the generic-resource-pools interface to find a resource pool of
> IPV4_ADDRESS
> >> resources (i.e. a Neutron routed network or subnet allocation pool)
> that has
> >> available IP space for the request.
> >>
> >
> > Not sure about the IPV4_ADDRESS part because I still didn't look on
> > how they resolve routed networks with this new framework, but for
> > other constraints makes perfect sense to me.
> >
> >> Best,
> >> -jay
> >>
> >>
> >>> Best regards,
> >>> Miguel Ángel Ajo
> >>>
> >>>
> >>> [1]
> >>>
> >>>
> http://lists.openstack.org/pipermail/openstack-dev/2016-February/086371.html
> >>> [2] https://bugs.launchpad.net/neutron/+bug/1560963
> >>> [3]
> >>>
> http://developer.openstack.org/api-ref-networking-v2-ext.html#showPolicy
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum]Cache docker images

2016-04-20 Thread Adrian Otto
Hongbin,

Both of the approaches you suggested may only work for one binary format. If you 
try to use docker on a different system architecture, the pre-cache of images 
makes it even more difficult to get the correct images built and loaded.

I suggest we take an approach that allows the Baymodel creator to specify a 
docker registry and/or prefix that will determine where docker images are 
pulled from if they are not found in the local cache. That would give cloud 
operators the option to set up such a registry locally and populate it with the 
right images. This approach would also make it easier to customize the Magnum 
setup by tweaking the container images prior to use.
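
As a rough sketch of that flow (the registry host, prefix and image tags below
are made up for illustration, and the Baymodel attribute that would carry them
does not exist yet):

# operator seeds a local registry once, per architecture if needed
docker pull gcr.io/google_containers/hyperkube:v1.2.0
docker tag gcr.io/google_containers/hyperkube:v1.2.0 \
    registry.cloud.local:5000/magnum/hyperkube:v1.2.0
docker push registry.cloud.local:5000/magnum/hyperkube:v1.2.0

# bay nodes then pull through the operator-provided prefix instead of the Internet
docker pull registry.cloud.local:5000/magnum/hyperkube:v1.2.0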

Thanks,

Adrian

On Apr 19, 2016, at 11:58 AM, Hongbin Lu 
> wrote:

Eli,

The approach of pre-pulling docker images has a problem. It only works for 
a specific docker storage driver. In comparison, the tar file approach is 
portable across different storage drivers.

Best regards,
Hongbin

From: taget [mailto:qiaoliy...@gmail.com]
Sent: April-19-16 4:26 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum]Cache docker images

Hi, hello again.

I believe you are talking about this bp:
https://blueprints.launchpad.net/magnum/+spec/cache-docker-images
If so, ignore my previous reply; that is another topic, about solving the
limited-network problem.

I think you are on the right track building the docker images into the cloud
image, but that image can only be bootstrapped by cloud-init; without cloud-init
the container image tar files are not loaded at all, so this may not be the
best way.

I'd suggest that the best way may be to pull the docker images while building the
atomic image. Per my understanding, the
image build process mounts the image in read/write mode on some tmp
directory and chroots into that directory,
so we can do some custom operations there.

I can give it a try in the build process (I guess rpm-ostree should support some hook
scripts).

On 2016年04月19日 11:41, Eli Qiao wrote:
@wanghua

I think there were some discussion already , check 
https://blueprints.launchpad.net/magnum/+spec/support-private-registry
and https://blueprints.launchpad.net/magnum/+spec/allow-user-softwareconfig
On 2016年04月19日 10:57, 王华 wrote:
Hi all,

We want to eliminate pulling docker images over the Internet on bay 
provisioning. There are two problems of this approach:
1. Pulling docker images over the Internet is slow and fragile.
2. Some clouds don't have external Internet access.

It is suggested to build all the required images into the cloud images to 
resolve the issue.

Here is a solution:
We export the docker images as tar files, and put the tar files into a dir in 
the image when we build the image. And we add scripts to load the tar files in 
cloud-init, so that we don't need to download the docker images.

Any advice for this solution or any better solution?

Regards,
Wanghua




__

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Best Regards, Eli Qiao (乔立勇)

Intel OTC China




__

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Best Regards, Eli Qiao (乔立勇)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nova] scheduling bandwidth resources / NIC_BW_KB resource class

2016-04-20 Thread Miguel Angel Ajo Pelayo
Inline update.

On Mon, Apr 11, 2016 at 4:22 PM, Miguel Angel Ajo Pelayo
 wrote:
> On Mon, Apr 11, 2016 at 1:46 PM, Jay Pipes  wrote:
>> On 04/08/2016 09:17 AM, Miguel Angel Ajo Pelayo wrote:
[...]
>> Yes, Nova's conductor gathers information about the requested networks
>> *before* asking the scheduler where to place hosts:
>>
>> https://github.com/openstack/nova/blob/stable/mitaka/nova/conductor/manager.py#L362
>>
>>>  That would require identifying that the port has a "qos_policy_id"
>>> attached to it, and then, asking neutron for the specific QoS policy
>>>   [3], then look out for a minimum bandwidth rule (still to be defined),
>>> and extract the required bandwidth from it.
>>
>>
>> Yep, exactly correct.
>>
>>> That moves, again some of the responsibility to examine and
>>> understand external resources to nova.
>>
>>
>> Yep, it does. The alternative is more retries for placement decisions
>> because accurate decisions cannot be made until the compute node is already
>> selected and the claim happens on the compute node.
>>
>>>  Could it make sense to make that part pluggable via stevedore?, so
>>> we would provide something that takes the "resource id" (for a port in
>>> this case) and returns the requirements translated to resource classes
>>> (NIC_BW_KB in this case).
>>
>>
>> Not sure Stevedore makes sense in this context. Really, we want *less*
>> extensibility and *more* consistency. So, I would envision rather a system
>> where Nova would call to Neutron before scheduling when it has received a
>> port or network ID in the boot request and ask Neutron whether the port or
>> network has any resource constraints on it. Neutron would return a
>> standardized response containing each resource class and the amount
>> requested in a dictionary (or better yet, an os_vif.objects.* object,
>> serialized). Something like:
>>
>> {
>>   'resources': {
>> '': {
>>   'NIC_BW_KB': 2048,
>>   'IPV4_ADDRESS': 1
>> }
>>   }
>> }
>>
>
> Oh, true, that's a great idea, having some API that translates a
> neutron resource to scheduling constraints. The external call will
> still be required, but the coupling issue is removed.
>
>


I had a talk yesterday with @iharchys, @dansmith, and @sbauzas about
this, and we believe the synthesis of resource usage / scheduling
constraints from neutron makes sense.

We should probably look into providing those details in a read only
dictionary during port creation/update/show in general, in that way,
we would not be adding an extra API call to neutron from the nova
scheduler to figure out any of those details. That extra optimization
is something we may need to discuss with the neutron community.



>> In the case of the NIC_BW_KB resource class, Nova's scheduler would look for
>> compute nodes that had a NIC with that amount of bandwidth still available.
>> In the case of the IPV4_ADDRESS resource class, Nova's scheduler would use
>> the generic-resource-pools interface to find a resource pool of IPV4_ADDRESS
>> resources (i.e. a Neutron routed network or subnet allocation pool) that has
>> available IP space for the request.
>>
>
> Not sure about the IPV4_ADDRESS part because I still didn't look on
> how they resolve routed networks with this new framework, but for
> other constraints makes perfect sense to me.
>
>> Best,
>> -jay
>>
>>
>>> Best regards,
>>> Miguel Ángel Ajo
>>>
>>>
>>> [1]
>>>
>>> http://lists.openstack.org/pipermail/openstack-dev/2016-February/086371.html
>>> [2] https://bugs.launchpad.net/neutron/+bug/1560963
>>> [3]
>>> http://developer.openstack.org/api-ref-networking-v2-ext.html#showPolicy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kolla multinode , multi-region deployment support

2016-04-20 Thread Michal Rostecki
On Wed, 2016-04-20 at 06:57 +, Steven Dake (stdake) wrote:
> Grzegorz,
> 
> This is a technical question about our roadmap and should be sent to
> the openstack-dev mailing list.  As such, ccing openstack-dev so
> everyone can benefit from my thinking on this matter and others can
> weigh in.
> 
> From: Grzegorz Koper  @pl.ibm.com>>
> Date: Monday, April 18, 2016 at 12:41 AM
> To: Steven Dake >
> Cc: Michal Nasiadka  a...@pl.ibm.com>>
> Subject: Kolla multinode , multi-region deployment support
> 
> Hello,
> First of all, thanks for all the work on the Kolla project.
> We are currently testing both the Kolla Liberty and Mitaka branches (yes, I
> know about the Liberty issues and the upcoming patch).
> We would love to deploy a multinode, multi-region topology but are failing
> to find any documentation on the topic.
> 
> Found this https://blueprints.launchpad.net/kolla/+spec/multi-kolla-config
> but it looks forgotten ;]
> Do you have any ideas or plans to implement such a setup in Kolla?
> Maybe some tips and tricks if we would like to start on this ourselves?
> 
> Thanks in advance
> 
> 
> 
> I'm sure the team thanks you for the kudos on the implementation of
> Kolla!
> 
> Nobody has brought up multi-AZs on the mailing list, in irc, or in
> our weekly irc meeting.  You're correct that blueprint hasn't seen much
> discussion, even though it's in the "discussion" phase.
> 
> Not really understanding how AZs are set up, I think it would be a
> fantastic feature and would be pleased to support the development of
> such work.  I don't have any real pointers on where to begin, other
> than that we have what I feel is a pretty full plate for Newton already
> and the core team will be stretched thin implementing this with our
> current community of developers.  We do welcome new contributors and
> can help you get your initial deployment setup so you can begin work
> if you desire.  Anyone on the core reviewer team can approve a
> blueprint once an assignee has been determined to lead the
> effort.  All you have to do is ask :)
> 
> Join us on #openstack-kolla to discuss the work, or drop into our
> design sessions on Wednesday/Thursday of summit where we discuss our
> roadmap.
> 
> Regards,
> -steve
> 

I'd like to add that we also have to keep in mind this blueprint:
https://blueprints.launchpad.net/kolla/+spec/multiple-clouds

So, we have to handle both configs per cloud and configs per cloud
region.

My proposition here is to have the following config trees:

/etc/kolla - default config path
/etc/kolla// - where default region is "region_one"

And then add an option "multiregion" or "multicloud" in config to tell
kolla whether it should look at the first path or at the second one.
Then of course support pointing kolla-ansible to the concrete cloud we
want to deploy or upgrade.
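
To make that concrete, the two trees might look something like this (the cloud
and region names are hypothetical, and the exact layout is precisely what needs
to be agreed on):

/etc/kolla/globals.yml                       # today's single-cloud default
/etc/kolla/passwords.yml
/etc/kolla/cloud1/region_one/globals.yml     # multicloud/multiregion layout
/etc/kolla/cloud1/region_two/globals.yml
/etc/kolla/cloud2/region_one/globals.yml

with kolla-ansible pointed at a single cloud/region pair per deploy or upgrade
run.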

I wouldn't like to have a separate path for  without
 - I think that supporting only one cloud with many regions
is unnecessary. IMO we should focus only on the scenarios of AIO
(first path) and multicloud-multiregion.

However, Steven, I don't know what availability zones have to do with
the multiregion blueprint. AZ is something existing in the single cloud
when you associate hosts, isn't it? And AZ is something which we are
able to use currently in kolla (in contrast to regions or multiple
clouds).

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] upgrade support between which versions of ironic?

2016-04-20 Thread Mathieu Mitchell



On 2016-04-19 11:29 PM, Tan, Lin wrote:

I agree it is reasonable to support all these cases for “cold upgrades”, but the 
supports-rolling-upgrade (in other words, live upgrade) case is different, 
more complicated, and not necessary.

During a rolling upgrade we will have old and new services co-existing, and we need 
to make the services compatible, which needs some extra code work and is the 
main purpose of spec [1]. And as far as I can see, we are not allowed to skip 
over releases when rolling upgrading. So my point is that supporting named releases is 
enough.

1. Because even if we want to support major-number releases, admins would have to upgrade 
from 5.0 -> 6.0 and then 6.0 -> 7.0 in Ruby's case of 5.0.0, 5.1.0 == Mitaka, 
5.2.0, 6.0.0, 6.1.0, 7.0.0, 7.1.0, 7.2.0 == Newton. And we might have a higher 
release frequency in the future. So that's too much work for admins who upgrade a service every 
six months.

2. We usually rolling-upgrade the whole cloud, not ironic only. For 
example, when other projects upgrade from Mitaka to Newton, there is not much 
sense in upgrading Ironic from 5.0 -> 6.0 only.



As an operator, I disagree with that statement. We follow different 
upgrade paths for Ironic and Glance for example. My area of concern 
around Ironic is compatibility with Nova and Neutron. If we can prove 
via CI that Nova on an older version still works with Ironic on master 
and vice versa, we will successfully avoid having to do them in lockstep.




Finally, we should add a multi-node grenade CI job to verify that the rolling upgrade
mechanism is not broken. Here is my suggestion: we have two nodes, Node A and Node B.
Node A runs both ironic-api and ironic-conductor from the last named release, and
Node B runs only ironic-conductor from the last named release. The multi-node grenade
CI then upgrades only Node A, so we can test that the interaction between the new SHA
of ironic-api and the new SHA/last named release of ironic-conductor still works.
This should also apply to stable branches.


That's a test case for "live upgrade", +1 :)

Mathieu




B.R

Tan

[1] https://review.openstack.org/299245

From: Devananda van der Veen [mailto:devananda@gmail.com]
Sent: Wednesday, April 20, 2016 5:12 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [ironic] upgrade support between which versions of 
ironic?

Thanks for starting the thread, Ruby.


We need to first establish a grenade job to test "cold upgrades" and assert the 
supports-upgrade tag. I believe Ironic meets all the criteria for that tag except:
- having a job that tests it (so, you know, it might be broken and I might be 
wrong)
- having operator documentation describing the process (it should be here: 
http://docs.openstack.org/developer/ironic/deploy/upgrade-guide.html ) but all 
we have are release-specific upgrade notes.

I think all of the scenarios you outline are valid upgrade paths for an operator, and we
should try to allow all of them to work. However, some of them can be covered by one test
case, and you also missed some things I think need to be covered. Also, I'm interpreting
the word "master" in your scenarios to mean the proposed change to our
master branch, since we do pre-merge testing.

So, here are the test cases I think we need to cover:

=== run on proposed changes to master ===

F. current master to new SHA

We need to ensure that we can upgrade master to the code being proposed. You 
listed this last, but I think it's actually the most important one.

D. last named release to new SHA
E. last numbered release to new SHA

Because we cut new releases from master, this is the basis of testing the 
upgrade between sequential (named or numbered) releases before we cut a new 
(named or numbered) release, and is our most important test to ensure that we 
don't break most operators. Based on the user survey, most operators are using 
named releases, so if we are resource constrained, I would prefer to cover (D)
before (E).

=== run on proposed changes to a stable branch ===

A. stable/N-1 -> new SHA -> [ stable/N+1 or current master]

We don't need to test upgrades between two named releases (eg. Liberty -> 
Mitaka) every time we land a new patch on the master branch, but we do need to 
test any time we land a change on a stable branch. Changes to the most recent 
stable branch should be upgrade-tested to current master, whereas changes to any 
stable branch prior to that should get tested to the subsequent sequential release.

Eg, a backport to stable/liberty should trigger an upgrade test for both (stable/kilo -> 
newpatch) and (newpatch  -> stable/mitaka), whereas a backport to stable/mitaka should 
trigger a test for (stable/liberty -> newpatch) and (newpatch -> master)


Once we've done that, then yes, I agree we should also work towards asserting 
supports-rolling-upgrade. That will require a partial upgrade job, eg. where we 
run >1 instance of the API and 

Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-20 Thread Miguel Angel Ajo Pelayo
I think this is an interesting topic.

What do you mean exactly by FC ? (feature chaining?)

I believe we have three things to look at:  (sorry for the TL)

1) The generalization of traffic filters / traffic classifiers. Having
common models, some sort of common API or common API structure
available, and translators to convert those filters to iptables,
openflow filters, etc..

2) The enhancement of extensibility of agents via the Extension API.

3) How we chain features in OpenFlow; the current approach of just
inserting rules results in incompatible extensions. This becomes
especially relevant for the new openvswitch firewall.

2 and 3 are interlinked, and a good mechanism to enhance (3) should be
provided in (2).

We need to resolve:

a) The order of tables, and how OpenFlow actions chain the
different features in the pipeline.  Some naive thinking brings me
to the idea that we need to identify the different input/output stages
of packet processing, and every feature/extension declares the point
where it needs to be. Then, once we have all the features, every
feature gets its own table number and the "next" action in the
pipeline.

b) We need to have a way to request OpenFlow registers to use in
extensions, so one extension doesn't overwrite another's registers.

   c) Those registers need to be given logical names that other
extensions can query for (for example "port_number", "local_zone",
etc.), and those standard registers should be filled in for all
extensions at the input stage.

   and probably d, e, f, g, h - whatever else I didn't manage to think of.
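
To make (a) and (b) a bit more concrete, here is a rough sketch of what such
a central allocator could look like. This is purely illustrative - the class
and method names are invented for the sake of discussion, not an existing
Neutron API:

    # Hypothetical allocator the OVS agent could hand to each L2 agent
    # extension, so table numbers and registers are coordinated centrally
    # instead of being hard-coded by every feature.

    class PipelinePosition(object):
        INGRESS = 0
        PRE_FIREWALL = 10
        POST_FIREWALL = 20
        EGRESS = 30

    class FlowPipelineAllocator(object):
        def __init__(self):
            self._next_table = 1
            self._next_register = 0
            self._tables = {}      # (extension, position) -> table number
            self._registers = {}   # logical name -> register index

        def allocate_table(self, extension_name, position):
            # Give the extension its own table at the requested stage.
            key = (extension_name, position)
            if key not in self._tables:
                self._tables[key] = self._next_table
                self._next_table += 1
            return self._tables[key]

        def allocate_register(self, logical_name):
            # Map a logical name (e.g. 'local_zone') to a free register.
            if logical_name not in self._registers:
                self._registers[logical_name] = self._next_register
                self._next_register += 1
            return self._registers[logical_name]

    # e.g. an SFC extension could then ask for:
    # table = allocator.allocate_table('sfc', PipelinePosition.POST_FIREWALL)
    # reg = allocator.allocate_register('port_number')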

On Fri, Apr 15, 2016 at 11:13 PM, Cathy Zhang  wrote:
> Hi Reedip,
>
>
>
> Sure will include you in the discussion. Let me know if there are other
> Tap-as-a-Service members who would like to join this initiative.
>
>
>
> Cathy
>
>
>
> From: reedip banerjee [mailto:reedi...@gmail.com]
> Sent: Thursday, April 14, 2016 7:03 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier and
> OVS Agent extension for Newton cycle
>
>
>
> Speaking on behalf of Tap-as-a-Service members, we would also be very much
> interested in the following initiative :)
>
>
>
> On Fri, Apr 15, 2016 at 5:14 AM, Ihar Hrachyshka 
> wrote:
>
> Cathy Zhang  wrote:
>
>
> I think there is no formal spec or anything, just some emails around there.
>
> That said, I don’t follow why it’s a requirement for SFC to switch to l2
> agent extension mechanism. Even today, with SFC maintaining its own agent,
> there are no clear guarantees for flow priorities that would avoid all
> possible conflicts.
>
> Cathy> There is no requirement for SFC to switch. My understanding is that
> current L2 agent extension does not solve the conflicting entry issue if two
> features inject the same priority table entry. I think this new L2 agent
> effort is trying to come up with a mechanism to resolve this issue. Of course
> if each feature (SFC or QoS) uses its own agent, then there is no
> coordination and no way to avoid conflicts.
>
>
> Sorry, I probably used misleading wording. I meant, why do we consider the
> semantic flow management support in l2 agent extension framework a
> *prerequisite* for SFC to switch to l2 agent extensions? The existing
> framework should already allow SFC to achieve what you have in the
> subproject tree implemented as a separate agent (essentially a fork of OVS
> agent). It will also set SFC to use standard extension mechanisms instead of
> hacky inheritance from OVS agent classes. So even without the strict
> semantic flow management, there is benefit for the subproject.
>
> With that in mind, I would split this job into 3 pieces:
> * first, adopt l2 agent extension mechanism for SFC functionality (dropping
> custom agent);
> * then, work on semantic flow management support in OVS agent API class [1];
> * once the feature emerges, switch SFC l2 agent extension to the new
> framework to manage SFC flows.
>
> I would at least prioritize the first point and target it to Newton-1. Other
> bullet points may take significant time to bake.
>
> [1]
> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_agent_extension_api.py
>
>
>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> --
>
> Thanks and Regards,
> Reedip Banerjee
>
> IRC: reedip
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-20 Thread Miguel Angel Ajo Pelayo
Sorry, I just saw it - FC = flow classifier :-). I made it a multi-purpose
abbreviation now ;)

On Wed, Apr 20, 2016 at 2:12 PM, Miguel Angel Ajo Pelayo
 wrote:
> I think this is an interesting topic.
>
> What do you mean exactly by FC ? (feature chaining?)
>
> I believe we have three things to look at:  (sorry for the TL)
>
> 1) The generalization of traffic filters / traffic classifiers. Having
> common models, some sort of common API or common API structure
> available, and translators to convert those filters to iptables,
> openflow filters, etc..
>
> 2) The enhancement of extensiblity of agents via Extension API.
>
> 3) How we chain features in OpenFlow, which current approach of just
> inserting rules, renders into incompatible extensions. This becomes
> specially relevant for the new openvswitch firewall.
>
> 2 and 3 are interlinked, and a good mechanism to enhance (3) should be
> provided in (2).
>
> We need to resolve:
>
> a) The order of tables, and how openflow actions chain the
> different features in the pipeline.  Some naive thinking brings me
> into the idea that we need to identify different input/output stages
> of packet processing, and every feature/extension declares the point
> where it needs to be. And then when we have all features, every
> feature get's it's own table number, and the "next" action in
> pipeline.
>
> b) We need to have a way to request openflow registers to use in
> extensions, so one extension doesn't overwrite other's registers
>
>c) Those registers need to be given a logical names that other
> extensions can query for (for example "port_number", "local_zone",
> etc..) , and those standard registers should be filled in for all
> extensions at the input stage.
>
>and probably c,d,e,f,g,h what I didn't manage to think of.
>
> On Fri, Apr 15, 2016 at 11:13 PM, Cathy Zhang  
> wrote:
>> Hi Reedip,
>>
>>
>>
>> Sure will include you in the discussion. Let me know if there are other
>> Tap-as-a-Service members who would like to join this initiative.
>>
>>
>>
>> Cathy
>>
>>
>>
>> From: reedip banerjee [mailto:reedi...@gmail.com]
>> Sent: Thursday, April 14, 2016 7:03 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier and
>> OVS Agent extension for Newton cycle
>>
>>
>>
>> Speaking on behalf of Tap-as-a-Service members, we would also be very much
>> interested in the following initiative :)
>>
>>
>>
>> On Fri, Apr 15, 2016 at 5:14 AM, Ihar Hrachyshka 
>> wrote:
>>
>> Cathy Zhang  wrote:
>>
>>
>> I think there is no formal spec or anything, just some emails around there.
>>
>> That said, I don’t follow why it’s a requirement for SFC to switch to l2
>> agent extension mechanism. Even today, with SFC maintaining its own agent,
>> there are no clear guarantees for flow priorities that would avoid all
>> possible conflicts.
>>
>> Cathy> There is no requirement for SFC to switch. My understanding is that
>> current L2 agent extension does not solve the conflicting entry issue if two
>> features inject the same priority table entry. I think this new L2 agent
>> effort is try to come up with a mechanism to resolve this issue. Of course
>> if each feature( SFC or Qos) uses its own agent, then there is no
>> coordination and no way to avoid conflicts.
>>
>>
>> Sorry, I probably used misleading wording. I meant, why do we consider the
>> semantic flow management support in l2 agent extension framework a
>> *prerequisite* for SFC to switch to l2 agent extensions? The existing
>> framework should already allow SFC to achieve what you have in the
>> subproject tree implemented as a separate agent (essentially a fork of OVS
>> agent). It will also set SFC to use standard extension mechanisms instead of
>> hacky inheritance from OVS agent classes. So even without the strict
>> semantic flow management, there is benefit for the subproject.
>>
>> With that in mind, I would split this job into 3 pieces:
>> * first, adopt l2 agent extension mechanism for SFC functionality (dropping
>> custom agent);
>> * then, work on semantic flow management support in OVS agent API class [1];
>> * once the feature emerges, switch SFC l2 agent extension to the new
>> framework to manage SFC flows.
>>
>> I would at least prioritize the first point and target it to Newton-1. Other
>> bullet points may take significant time to bake.
>>
>> [1]
>> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_agent_extension_api.py
>>
>>
>>
>> Ihar
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>>
>> --
>>
>> Thanks and Regards,

[openstack-dev] [telemetry][nova] Versioned notification with JSON schema

2016-04-20 Thread Balázs Gibizer
Hi, 

Just want to give telemetry a heads-up that, on top of the versioned notification
transformation in nova [1], we are planning to provide JSON schemas for the
versioned notifications [2]. I hope the telemetry project has a view on how it wants
to consume those schemas.
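
For what it's worth, one way a consumer on the telemetry side could use the
published schemas is plain jsonschema validation of incoming payloads. A
minimal sketch - the schema and field names below are invented placeholders;
the real schemas will come from nova per [2]:

    import jsonschema

    # Invented example schema; the real ones will be published by nova.
    instance_update_schema = {
        "type": "object",
        "properties": {
            "nova_object.version": {"type": "string"},
            "nova_object.data": {"type": "object"},
        },
        "required": ["nova_object.version", "nova_object.data"],
    }

    def is_known_payload(payload):
        # Return True if the notification payload matches the schema.
        try:
            jsonschema.validate(payload, instance_update_schema)
            return True
        except jsonschema.ValidationError:
            return False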

Any comments are welcome in the specs!

Cheers,
Gibi

[1] https://review.openstack.org/#/c/286675/ 
[2] https://review.openstack.org/#/c/304454/ 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] openstack client slowness / client-as-a-service

2016-04-20 Thread Sean Dague
On 04/19/2016 11:03 PM, Dean Troyer wrote:
> 
> 
> On Tue, Apr 19, 2016 at 8:17 PM, Adam Young  > wrote:
> 
> Maybe it is time to revamp Devstack.  Is there some way that,
> without a major rewrite, it could take better advantage of the CLI?
> Could we group commands, or migrate sections to python scripts that
> really all need to be done together?  For example, most of the early
> prep of the Keystone server moved to keystone-manage bootstrap.  Is
> there more bootstrap-type behavior we can and should consolidate?
> 
> 
> This is what I was talking about, trying to take advantage of the
> interactive mode that also reads from stdin to do a series of commands
> with a single load/auth cycle.  It lacks a LOT of things for a resilient
> use case such as DevStack (error abort or error ignore?, branching,
> everything a DSL would bring).
> 
> And if you'd like to replace stack.sh with stack.py, I'll not stop you,
> just don't call it DevStack.  Now you are building yet another
> deployment tool.  We've also been down that road before. It may well be
> time to retire DevStack, be sure to let us know when those willing to
> sponsor that work show up so they can attempt to learn from some of our
> mistakes and not repeat them the hard way.

I agree that the CLI being slow is an issue. It's an issue that hits all
the developers because it's adding 3 minutes to devstack runs.

We've stated that openstack client is our strategic interface to lead
with. We've also all experienced that it's so terribly slow for a CLI,
that it leaves a bad taste in our mouths.

While there are a lot of things that Devstack could do better
(especially when it comes to loading all keystone data (users / sc
entries), which is the majority of the time spent in osc), it does seem
to paper over a real issue that doesn't seem to be priority #1 for OSC
right now (or any of our CLIs).

So, could we get back to the root question.

What's actually taking the time? Can that be improved? All the claims
that openstacksdk or occ would make things better assume that they aren't
loading as much python code or don't have dynamic entry points that
contribute to the slowness. There seems to be a lot of assumptions
here, and only Dan brought real data to the table.

So the real question is:

1) is anyone sufficiently motivated to do a data driven analysis (and
propose guidelines to addressing) why our python CLIs are slow?

Dan provided a starting point, but I've seen no one actually volunteer
to complete this work, or decide it's a project priority.

All the statements here of "use Lang Foo", "use Library X instead" are
pretty much shooting from the hip, with no actual data. Yes, there are
fundamental issues with python module loading.  These are problems that
can be solved if people are willing to do the hard work to profile and
limit the extensibility of the code.
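
As a starting point for whoever picks this up, even a crude cProfile run over
the CLI entry point gives useful data on where start-up time goes (imports,
entry-point scanning, option parsing) before any API call is made. A rough
sketch - it assumes python-openstackclient is installed and that
openstackclient.shell is still the console-script entry point:

    import cProfile
    import pstats
    import sys

    def run_cli():
        # Import inside the function so the import cost is part of the profile.
        from openstackclient import shell
        sys.argv = ['openstack', '--help']
        try:
            shell.main()
        except SystemExit:
            pass  # --help exits; we only want the profile data

    profiler = cProfile.Profile()
    profiler.runcall(run_cli)
    pstats.Stats(profiler).sort_stats('cumulative').print_stats(30)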

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] API features discoverability

2016-04-20 Thread Duncan Thomas
On 19 April 2016 at 23:42, Michał Dulko  wrote:

> On 04/18/2016 09:17 AM, Ramakrishna, Deepti wrote:
> > Hi Michal,
> >
> > This seemed like a good idea when I first read it. What's more, the server
> code for extension listing [1] does not do any authorization, so it can be
> used by any logged-in user.
> >
> > However, I don't know if requiring the admin to manually disable an
> extension is practical. First, admins can always forget to do that. Second,
> even if they wanted to, it is not clear how they could disable specific
> extensions. I assume they would need to edit the cinder.conf file. This
> file currently lists the set of extensions to load as
> cinder.api.contrib.standard_extensions. The server code [2] implements this
> by walking the cinder/api/contrib directory and loading all discovered
> extensions. How is it possible to subtract just one extension from the
> "standard extensions"? Also, system capabilities and extensions may not
> have a 1:1 relationship in general.
>
> Good point, to make that a standard for Cinder API feature discovery we
> would still need to make that more admin-friendly. This also implies
> that probably no admin is actually caring about setting the set of
> extensions correctly.
>

Certainly not *no* admins - the HP public cloud disabled a bunch of extensions
on the public endpoint, for example - but it isn't something we can rely on.
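
For reference, checking what extensions a given endpoint actually exposes is a
single authenticated GET against the extensions resource. A rough sketch - the
token and endpoint values are placeholders for a real cloud:

    import requests

    TOKEN = '<keystone-token>'
    VOLUME_ENDPOINT = 'https://volume.example.com/v2/<project_id>'

    resp = requests.get(VOLUME_ENDPOINT + '/extensions',
                        headers={'X-Auth-Token': TOKEN})
    resp.raise_for_status()
    for ext in resp.json().get('extensions', []):
        # Each entry carries at least an alias and a name.
        print(ext.get('alias'), '-', ext.get('name'))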


> > Having a new extension API (as proposed by me in [3]) for returning the
> available services/functionality does not have the above problems. It will
> dynamically check the existence of the cinder-backup service, so it does
> not need manual action from admin. I have published a BP [4] related to
> this. Can you please comment on that?
>
> Yes, but I don't think you can run away from setting things manually.
> For example CGs are supported only for certain backends. This set of
> features should also be discoverable. Anyway I think the spec makes sense.
>

Volume type feature discovery is different from (but related to) API feature
discovery.

> This is unfortunately going against the recent efforts of standardizing
> how OpenStack works between deployments. In Cinder we have API features
> that may or may not be available in different installations. This
> certainly isn't addressed by microversions efforts, which may seem
> related. My feeling is that this goes beyond Cinder and hits a more
> general topic of API discoverability. I think that we should seek the
> API WG's advice on that matter. Do we have other OpenStack projects
> suffering from a similar issue?
>
>
It's a nice aim to have clouds be entirely consistent, but then you're left
with the lowest common denominator. Replication and CG support in cinder
are both valuable to a subset of users, and extremely difficult to make
universal (I'm still hoping somebody can tell me why CGs at the hypervisor
are impossible to get right FWIW). Neutron is likely to be the largest
example of differentiated features, and Manila has some too.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-keystoneclient] Return request-id to caller

2016-04-20 Thread Duncan Thomas
On 20 April 2016 at 08:08, koshiya maho  wrote:


> This design was discussed, reviewed and approved in cross-projects [1] and
> already implemented in nova, cinder and neutron.
> At this point if we change the implementation then it will not be
> consistent across core OpenStack projects.
> For maintenance of the whole of OpenStack, I think that the present method
> is best.
> Please suggest.
>

The fact that a cross-project spec is approved doesn't mean that it will
end up being practical. If the cinder-client implementation had been found
to break any non-trivial users then I wouldn't have hesitated.

Cross-project specs are not getting massive amounts of detailed attention
from project teams, and even if they were, it is not possible to foresee all
subtle problems at review time - they should be taken as guidance, not
gospel, and expected to be reworked if that proves necessary.

-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][plugins] VIP addresses and network templates

2016-04-20 Thread Aleksey Kasatkin
Hi Simon,

When a network template is in use, the mapping of network roles to endpoints is
specified in the "roles" section (in the template). So "default_mapping" from the
network role description is overridden by the network template.
E.g.:

network_assignments:
  monitoring:
    ep: br-mon
  ...

network_scheme:
  custom:
    roles:
      influxdb_vip: br-mon
    ...
  ...


I hope this helps.

Regards,



Aleksey Kasatkin


On Wed, Apr 20, 2016 at 12:16 PM, Simon Pasquier 
wrote:

> Hi,
> I've got a question regarding network templates and VIP. Some of our users
> want to run the StackLight services (eg Elasticsearch/Kibana and
> InfluxDB/Grafana servers) on a dedicated network (lets call it
> 'monitoring'). People use network templates [0] to provision this
> additional network but how can Nailgun allocate the VIP address(es) from
> this 'monitoring' network knowing that today the plugins specify the
> 'management' network [1][2]?
> Thanks for your help,
> Simon
> [0]
> https://docs.mirantis.com/openstack/fuel/fuel-8.0/operations.html#using-networking-templates
> [1]
> https://github.com/openstack/fuel-plugin-influxdb-grafana/blob/8976c4869ea5ec464e5d19b387c1a7309bed33f4/network_roles.yaml#L4
> [2]
> https://github.com/openstack/fuel-plugin-elasticsearch-kibana/blob/25b79aff9a79d106fc74b33535952d28b0093afb/network_roles.yaml#L2
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L3 HA testing on scale

2016-04-20 Thread Dina Belova
Folks,

I think Ann's report is super cool and 100% worth publishing in the OpenStack
performance-docs.
This is really good information to share community-wide.

Ann, please consider whether you would like to contribute it to the performance
documentation.

Cheers,
Dina

On Wed, Apr 20, 2016 at 12:34 PM, Anna Kamyshnikova <
akamyshnik...@mirantis.com> wrote:

> Unfortunately, I won't attend summit in Austin, that is why I decided to
> present these results in the mailing list instead.
>
> On Tue, Apr 19, 2016 at 7:29 PM, Edgar Magana 
> wrote:
>
>> Is there any session presenting these results during the Summit? It would
>> be awesome to have a session on this. I could extend the invite to the Ops
>> Meet-up. We have a section on lightning talks where the team will be very
>> interested in learning from your testing.
>>
>> Edgar
>>
>> From: Anna Kamyshnikova 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Tuesday, April 19, 2016 at 5:30 AM
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Subject: Re: [openstack-dev] [Neutron] L3 HA testing on scale
>>
>> >I would definitely like to see how these results are affected by
>> >https://review.openstack.org/#/c/305774/ but understandably 49
>> >physical nodes are hard to come by.
>>
>> Yes, I'm planning to check how the situation will change with all recent
>> fixes, but I will be able to do this in May or later.
>>
>> >About testing on scale it’s not so problematic because of the Cloud For
>> All project.
>> >Here [1] you can request a multi-node cluster which you can use to
>> >perform tests. Exact requirements are specified on that website.
>>
>> [1] http://osic.org
>>
>> Thanks for pointing this!
>>
>> >It's a great report, thanks for sharing that! Do you plan to run similar
>> >scale tests on other scenarios e.g. dvr?
>>
>> Thanks! I have testing L3 HA + DVR in plans.
>>
>> P. S.
>>
>> I've updated environment description in report with some details.
>>
>> On Tue, Apr 19, 2016 at 12:52 PM, Rossella Sblendido > > wrote:
>>
>>>
>>>
>>> On 04/18/2016 04:15 PM, Anna Kamyshnikova wrote:
>>> > Hi guys!
>>> >
>>> > As a developer I use Devstack or multinode OpenStack installation (4-5
>>> > nodes) for work, but these are "abstract" environments, where you are
>>> > not able to perform some scenarios as your machine is not powerful
>>> > enough. But it is really important to understand the issues that real
>>> > deployments have.
>>> >
>>> > Recently I've performed testing of L3 HA on the scale environment 49
>>> > nodes (3 controllers, 46 computes) Fuel 8.0. On this environment I ran
>>> > shaker and rally tests and also performed some
>>> > manual destructive scenarios. I think that this is very important to
>>> > share these results. Ideally, I think that we should collect statistics
>>> > for different configurations each release to compare and check it to
>>> > make sure that we are heading the right way.
>>> >
>>> > The results of shaker and rally tests [1]. I put detailed report in
>>> > google doc [2]. I would appreciate all comments on these results.
>>>
>>> It's a great report, thanks for sharing that! Do you plan to run similar
>>> scale tests on other scenarios e.g. dvr?
>>>
>>> Rossella
>>>
>>> >
>>> > [1] - http://akamyshnikova.github.io/neutron-benchmark-results/
>>> > [2]
>>> > -
>>> https://docs.google.com/a/mirantis.com/document/d/1TFEUzRRlRIt2HpsOzFh-RqWwgTzJPBefePPA0f0x9uw/edit?usp=sharing
>>> >
>>> > Regards,
>>> > Ann Kamyshnikova
>>> > Mirantis, Inc
>>> >
>>> >
>>> >
>>> __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Regards,
>> Ann Kamyshnikova
>> Mirantis, Inc
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Regards,
> Ann Kamyshnikova
> Mirantis, Inc
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

Re: [openstack-dev] [tripleo][heat] Summit session clashes

2016-04-20 Thread Ethan Lynn
How about moving functional tests to Wednesday at 3:30? If not, I will sync up
with you guys later :)

Best Regards,
Ethan Lynn
xuanlangj...@gmail.com




> On Apr 20, 2016, at 18:46, Thomas Herve  wrote:
> 
> On Wed, Apr 20, 2016 at 12:14 PM, Ethan Lynn  wrote:
>> Could we move Functional Tests to Thursday? I have a hands-on workshop at
>> wed 4:30-6:00 pm.
> 
> Sorry, I don't have anything to switch it with.
> 
> -- 
> Thomas
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] http response code

2016-04-20 Thread Shinobu Kinjo
This might be the answer to your question.

https://github.com/openstack/tricircle/blob/master/tricircle/api/controllers/pod.py
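
For illustration, the usual pattern in a pecan-based controller looks roughly
like the sketch below. It is only a sketch for discussion, not the actual
Tricircle code, and the in-memory _PODS dict is a made-up stand-in for the
real storage layer:

    import pecan
    from pecan import rest

    _PODS = {}  # made-up stand-in for the real storage layer

    class PodsController(rest.RestController):

        @pecan.expose(generic=True, template='json')
        def get_one(self, pod_id):
            pod = _PODS.get(pod_id)
            if pod is None:
                # 404 consistently means "resource not found"
                pecan.abort(404, 'pod %s could not be found' % pod_id)
            return {'pod': pod}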

Cheers,
S

On Wed, Apr 20, 2016 at 6:37 PM, 李戈  wrote:

> Hi
> I read the API source code recently and have a question. Do we standardize
> the HTTP response codes?
>
>
> For example, 404 means "Not Found".
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Email:
shin...@linux.com
GitHub:
shinobu-x 
Blog:
Life with Distributed Computational System based on OpenSource

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat] Summit session clashes

2016-04-20 Thread Thomas Herve
On Wed, Apr 20, 2016 at 12:14 PM, Ethan Lynn  wrote:
> Could we move Functional Tests to Thursday? I have a hands-on workshop at
> wed 4:30-6:00 pm.

Sorry, I don't have anything to switch it with.

-- 
Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [glance] Image import: operator plugins

2016-04-20 Thread stuart . mclaren

Hi,

As part of Glance's image import [1] work we need to define how operators
can run site-specific code as part of the import process. For example,
an operator may want to perform some site-specific validation on the
image bits.

Note that I'm not so much interested in what we use to do this (i.e.
we don't need to discuss the merits of taskflow/stevedore). Rather, I'm
just interested in what level of abstraction we should provide.

Currently, the closest thing we have to a plugin [2] provides no
abstraction at all. You need to understand the Glance internals to use
it, and you are not insulated from any future changes in Glance's internals.
You can do things like update the image state directly in your plugin
(possibly in a racy or invalid way, leading to a poor experience for users).

To what degree are the following true?

1. An operator should not have to become intimately familiar with
internal Glance code in order to write a plugin.
2. The plugin should be as insulated as possible from future changes in
Glance's internals.
3. The plugin should have very limited access to what it can touch. It
should not be possible for plugins to cause race conditions by
setting the image state for example. (This is related to 1 and 2).
4. The interface should just require instantiating a class.  We shouldn't
require plugin writers to know implementation details, e.g. how Glance logs
(instead, provide a log method), or whether or not we use taskflow.

To give an example of an interface of this kind, we could just provide
an image data iterator to the plugin class. Obviously this is a really
incomplete example, but gives an idea of a simple interface which would
allow a plugin to read the image data and perform validation without
knowing/being exposed to Glance's internals.
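
To sketch what that contract could look like - the names below are invented
for the sake of discussion, not an actual Glance interface:

    class ImportValidationError(Exception):
        """Raised by an operator plugin to reject the image being imported."""

    class ImageImportPlugin(object):
        """Hypothetical contract: the plugin only sees read-only metadata,
        an iterator over the image bits, and a log callable supplied by
        Glance - never Glance internals or the image state."""

        def validate(self, image_metadata, data_iter, log):
            raise NotImplementedError()

    class MaxSizePlugin(ImageImportPlugin):
        """Example operator plugin: reject images above a size limit."""

        MAX_BYTES = 10 * 1024 * 1024 * 1024  # 10 GB

        def validate(self, image_metadata, data_iter, log):
            total = 0
            for chunk in data_iter:
                total += len(chunk)
                if total > self.MAX_BYTES:
                    log('rejecting image %s: too large'
                        % image_metadata.get('id'))
                    raise ImportValidationError(
                        'image exceeds %d bytes' % self.MAX_BYTES)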

(Note: I'd like to keep the scope of this thread to what kind of
interface/contract a plugin should have, not what we use to implement
it, whether we should be doing this at all etc)

Thanks,

-Stuart

[1] https://review.openstack.org/#/c/232371
[2] https://review.openstack.org/#/c/214810/26/glance/async/flows/ovf_process.py

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

