Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate

2018-07-17 Thread Ian Wienand
On 07/13/2018 06:38 AM, Thomas Goirand wrote:
> Now, both Debian and Ubuntu have Python 3.7. Every package which I
> upload in Sid need to support that. Yet, OpenStack's CI is still
> lagging with Python 3.5.

OpenStack's CI is rather broad -- I'm going to assume we're talking
about whole-system devstack-ish based functional tests.  Yes, most
testing is on Xenial and hence Python 3.5.

We have Python 3.6 available via Bionic nodes.  I think the current
plan is to look at mass updates after the next release.  Historically,
such updates are fairly disruptive.

> I'm aware that there's been some attempts in the OpenStack infra to
> have Debian Sid (which is probably the distribution getting the
> updates the faster).

We do not currently build Debian sid images, or mirror the unstable
repos or do wheel builds for Debian.  diskimage-builder also doesn't
test it in CI.  This is not to say it can't be done.

> If it cannot happen with Sid, then I don't know, choose another
> platform, and do the Python 3-latest gating...

Fedora has been consistently updated in OpenStack Infra for many
years.  IMO, and from my experience, six-monthly-ish updates are about
as frequent as can be practically handled.

The ideal is that a (say) Neutron dev gets a clear traceback from a
standard Python error in their change and happily fixes it.  The
reality is probably more like this: the developer gets a tempest
failure due to nova failing to boot a cirros image, stemming from a
detached volume due to a qemu bug that manifests due to a libvirt
update (I'm exaggerating, I know :).

That sort of deeply tangled platform issue always exists; however, it
is amortised across the lifespan of the testing.  So several weeks
after we update all these key components, a random Neutron dev can be
pretty sure that submitting their change is actually testing *their*
change, and not really a de facto test of every other tangentially
related component.

A small but real example: uwsgi wouldn't build with the gcc/glibc
combo on Fedora 28 for two months after its release, until uwsgi
2.0.17.1.  Fedora carried patches; but of course there were a lot of
previously unconsidered assumptions in devstack around deployment that
made using the packaged versions difficult [1] (that stack still
hasn't received any reviews).

Nobody would claim diskimage-builder is the greatest thing ever, but
it does produce our customised images in a wide variety of formats
that run in our very heterogeneous clouds.  It's very reactive -- we
don't know about package updates until they hit the distro, and
sometimes that breaks assumptions.  It's largely taken for granted in
our CI, but it takes constant, sustained effort across the infra team
to make sure we have somewhere to test.

I hear myself sounding negative, but I think it's a fundamental
problem.  You can't be dragging in the latest of everything AND expect
that you won't be constantly running off fixing weird things you never
even knew existed.  We can (and do) get to the bottom of these things,
but if the platform changes again before you've even fixed the current
issue, things start piling up.

If the job is constantly broken it gets ignored -- if a non-voting
job fails in the woods, does it make a sound? :)

> When this happens, moving faster with Python 3 versions will be
> mandatory for everyone, not only for fools like me who made the
> switch early.

This is a long way of saying that - IMO - the idea of putting out a
Debian sid image daily (to a lesser degree Buster images) and throwing
a project's devstack runs against it is unlikely to produce a good
problems-avoided : development-resources ratio.  However, prove me
wrong :)

If people would like to run their master against Fedora (note
OpenStack's stable branch lifespan is generally longer than a given
Fedora release is supported, so it is not much good there) you have
later packages, but still a fairly practical 6-month-ish stability
cadence.  I'm happy to help (some projects do already).

> 

With my rant done :) ... there's already discussion around multiple
python versions, containers, etc in [2].  While I'm reserved about the
idea of full platform functional tests, essentially having a
wide variety of up-to-date tox environments using some of the methods
discussed there is, I think, a very practical way to be cow-catching
some of the bigger issues with Python version updates.  If we are to
expend resources, my 2c worth is that pushing in that direction gives
the best return on effort.

-i

[1] https://review.openstack.org/#/c/565923/
[2] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132152.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-17 Thread Zane Bitter

On 17/07/18 10:44, Thierry Carrez wrote:

Finally found the time to properly read this...


For anybody else who found the wall of text challenging, I distilled the 
longest part into a blog post:


https://www.zerobanana.com/archive/2018/07/17#openstack-layer-model-limitations


Zane Bitter wrote:

[...]
We chose to add features to Nova to compete with vCenter/oVirt, and 
not to add features that would have enabled OpenStack as a whole to 
compete with more than just the compute provisioning subset of 
EC2/Azure/GCP.


Could you give an example of an EC2 action that would be beyond the 
"compute provisioning subset" that you think we should have built into 
Nova ?


Automatic provision/rotation of application credentials.
Reliable, user-facing event notifications.
Collection of usage data suitable for autoscaling, billing, and whatever 
it is that Watcher does.


Meanwhile, the other projects in OpenStack were working on building 
the other parts of an AWS/Azure/GCP competitor. And our vague 
one-sentence mission statement allowed us all to maintain the delusion 
that we were all working on the same thing and pulling in the same 
direction, when in truth we haven't been at all.


Do you think that organizing (tying) our APIs along [micro]services, 
rather than building a sanely-organized user API on top of a 
sanely-organized set of microservices, played a role in that divide ?


TBH, not really. If I were making a list of contributing factors I would 
probably put 'path dependence' at #1, #2 and #3.


At the start of this discussion, Jay posted on IRC a list of things that 
he thought shouldn't have been in the Nova API[1]:


- flavors
- shelve/unshelve
- instance groups
- boot from volume where nova creates the volume during boot
- create me a network on boot
- num_instances > 1 when launching
- evacuate
- host-evacuate-live
- resize where the user 'confirms' the operation
- force/ignore host
- security groups in the compute API
- force delete server
- restore soft deleted server
- lock server
- create backup

Some of those are trivially composable in higher-level services (e.g. 
boot from volume where nova creates the volume, get me a network, 
security groups). I agree with Jay that in retrospect it would have been 
cleaner to delegate those to some higher level than the Nova API (or, 
equivalently, for some lower-level API to exist within what is now 
Nova). And maybe if we'd had a top-level API like that we'd have been 
more aware of the ways that the lower-level ones lacked legibility for 
orchestration tools (oaktree is effectively an example of a top-level 
API like this, I'm sure Monty can give us a list of complaints ;)


But others on the list involve operations at a low level that don't 
appear to me to be composable out of simpler operations. (Maybe Jay has 
a shorter list of low-level APIs that could be combined to implement all 
of these, I don't know.) Once we decided to add those features, it was 
inevitable that they would reach right the way down through the stack to 
the lowest level.


There's nothing _organisational_ stopping Nova from creating an internal 
API (it need not even be a ReST API) for the 'plumbing' parts, with a 
separate layer that does orchestration-y stuff. That they're not doing 
so suggests to me that they don't think this is the silver bullet for 
managing complexity.


What would have been a silver bullet is saying 'no' to a bunch of those 
features, preferably starting with 'restore soft deleted server'(!!) and 
shelve/unshelve(?!). When AWS got feature requests like that they didn't 
say 'we'll have to add that in a higher-level API', they said 'if your 
application needs that then cloud is not for you'. We were never 
prepared to say that.


[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-26.log.html#t2018-06-26T15:30:33


We can decide that we want to be one, or the other, or both. But if we 
don't all decide together then a lot of us are going to continue 
wasting our time working at cross-purposes.


If you are saying that we should choose between being vCenter or AWS, I 
would definitely say the latter.


Agreed.

But I'm still not sure I see this issue 
in such a binary manner.


I don't know that it's still a viable option to say 'AWS' now. Given our 
installed base of users and our commitment to not breaking them, our 
practical choices may well be between 'vCenter' or 'both'.


It's painful because had we chosen 'AWS' at the beginning then we could 
have avoided the complexity hit of many of those features listed above, 
and spent our complexity budget on cloud features instead. Now we are 
locked in to supporting that legacy complexity forever, and it has 
reportedly maxed out our complexity budget to the point where people are 
reluctant to implement any cloud features, and unable to refactor to 
make them easier.


Astute observers will note that this is a *textbook* case of the 
Innovator's Dilemma.


Imagine 

Re: [openstack-dev] [all][tc][release][election][adjutant] Welcome Adjutant as an official project!

2018-07-17 Thread Adrian Turjak
Thanks!

As the current project lead for Adjutant I welcome the news, and while I
know it wasn't an easy process, I would like to thank everyone involved in
the voting. All the feedback (good and bad) will be taken on board to
make the service as well suited to OpenStack as possible in the space we've
decided it can fit.

Now to onboarding, choosing a suitable service type, and preparing for a
busy Stein cycle!

- Adrian


On 18/07/18 05:52, Doug Hellmann wrote:
> The Adjutant team's application [1] to become an official project
> has been approved. Welcome!
>
> As I said on the review, because it is past the deadline for Rocky
> membership, Adjutant will not be considered part of the Rocky
> release, but a future release can be part of Stein.
>
> The team should complete the onboarding process for new projects,
> including holding PTL elections for Stein, setting up deliverable
> files in the openstack/releases repository, and adding meeting
> information to eavesdrop.openstack.org.
>
> I have left a comment on the patch setting up the Stein election
> to ask that the Adjutant team be included.  We can also add Adjutant
> to the list of projects on docs.openstack.org for Stein, after
> updating your publishing job(s).
>
> Doug
>
> [1] https://review.openstack.org/553643
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-17 Thread Sean McGinnis
On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote:
> Hi Cinder and Nova folks,
> 
> Working on some tests for our drivers, I stumbled upon the tempest test
> 'force_detach_volume', which calls the Cinder API passing a 'None'
> connector. At the time this was added, several CIs went down, and people
> started discussing whether this (accepting/sending a None connector)
> would be the proper behavior for what a driver is expected to do [1]. So,
> some CIs started just skipping that test [2][3][4] and others implemented
> fixes that made the driver disconnect the volume from all hosts if a None
> connector was received [5][6][7].

Right, it was determined the correct behavior for this was to disconnect the
volume from all hosts. The CIs that are skipping this test should stop doing so
(once their drivers are fixed of course).

> 
> While implementing this fix seems to be straightforward, I feel that just
> removing the volume from all hosts is not the correct thing to do, mainly
> considering that we can have multi-attach.
> 

I don't think multiattach makes a difference here. Someone is forcibly
detaching the volume and not specifying an individual connection. So based on
that, Cinder should be removing any connections, whether that is to one or
several hosts.
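For anyone fixing a driver, that behavior roughly amounts to the sketch
below. This is only an illustration of the pattern, not code from any of the
drivers referenced above; the helper names (_attached_hosts, _remove_export)
are made up.

    # Hedged sketch: on a force-detach Cinder may pass connector=None, in
    # which case every connection should be torn down rather than just the
    # one host a connector would normally describe.
    def terminate_connection(self, volume, connector, **kwargs):
        if connector is None:
            # Force-detach with no connector: remove all exports/connections.
            for host in self._attached_hosts(volume):
                self._remove_export(volume, host)
            return
        # Normal path: detach only the host described by the connector.
        self._remove_export(volume, connector['host'])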

> So, my questions are: What is the best way to fix this problem? Should the
> Cinder API continue to accept detachments with None connectors? If so,
> what would be the effects on other Nova attachments for the same volume?
> Is there any side effect if the volume is not multi-attached?
> 
> In addition to this thread, I will bring this topic to tomorrow's Cinder
> meeting, so please join if you have something to share.
> 

+1 - good plan.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker)

2018-07-17 Thread Ben Nemec



On 07/17/2018 03:00 PM, Michele Baldessari wrote:

Hi Jarda,

thanks for these perspectives, this is very valuable!

On Tue, Jul 17, 2018 at 06:01:21PM +0200, Jaromir Coufal wrote:

Not rooting for any approach here, just want to add a bit of factors which 
might play a role when deciding which way to go:

A) Performance matters, we should be improving simplicity and speed of
deployments rather than making it heavier. If the deployment time and
resource consumption is not significantly higher, I think it doesn’t
cause an issue. But if there is a significant difference between PCMK
and keepalived architecture, we would need to review that.


+1 Should the pcmk take substantially more time then I agree, not worth
defaulting to it. Worth also exploring how we could tweak things
to make the setup of the cluster a bit faster (on a single node we can
lower certain wait operations) but full agreement on this point.


B) Containerization of PCMK plans - eventually we would like to run
the whole undercloud/overcloud on minimal OS in containers to keep
improving the operations on the nodes (updates/upgrades/etc). If
because PCMK we would be forever stuck on BM, it would be a bit of
pita. As Michele said, maybe we can re-visit this.


So I briefly discussed this in our team, and while it could be
re-explored, we need to be very careful about the tradeoffs.
This would be another layer which would bring quite a bit of complexity
(pcs commands would have to be run inside a container, speed tradeoffs,
more limited possibilities when it comes to upgrading/updating, etc.)


C) Unification of undercloud/overcloud is important for us, so +1 to
whichever method is being used in both. But what I know, HA folks went
to keepalived since it is simpler so would be good to keep in sync
(and good we have their presence here actually) :)


Right so to be honest, the choice of keepalived on the undercloud for
VIP predates me and I was not directly involved, so I lack the exact
background for that choice (and I could not quickly reconstruct it from git
history). But I think it is/was a reasonable choice for what it needs
doing, although I probably would have picked just configuring the extra
VIPs on the interfaces and have one service less to care about.
+1 in general on the unification, with the caveats that have been
discussed so far.


The only reason there even are vips on the undercloud is that we wanted 
ssl support, and we implemented that through the same haproxy puppet 
manifest as the overcloud, which required vips.  Keepalived happened to 
be what it was using to provide vips at the time, so that's what we 
ended up with.  There wasn't a conscious decision to use keepalived over 
anything else.





D) Undercloud HA is a nice have which I think we want to get to one
day, but it is not in as big demand as for example edge deployments,
BM provisioning with pure OS, or multiple envs managed by single
undercloud. So even though undercloud HA is important, it won’t bring
operators as many benefits as the previously mentioned improvements.
Let’s keep it in mind when we are considering the amount of work
needed for it.


+100


I'm still of the opinion that undercloud HA shouldn't be a thing.   It 
brings with it a whole host of problems and I have yet to hear a 
realistic use case that actually requires it.  We were quite careful to 
make sure that the overcloud can continue to run indefinitely without 
the undercloud during downtime.


*Maybe* sometime in the future when those other features are implemented 
it will make more sense, but I don't think it does right now.





E) One of the use-cases we want to take into account is expanding a
single-node deployment (all-in-one) to a 3-node HA controller. I think
it is important when evaluating PCMK/keepalived


Right, so to be able to implement this, there is no way around having
pacemaker (at least today until we have galera and rabbit).
It still does not mean we have to default to it, but if you want to
scale beyond one node, then there is no other option atm.


HTH


It did, thanks!

Michele

— Jarda


On Jul 17, 2018, at 05:04, Emilien Macchi  wrote:

Thanks everyone for the feedback, I've made a quick PoC:
https://review.openstack.org/#/q/topic:bp/undercloud-pacemaker-default

And I'm currently doing local testing. I'll publish results when progress is 
made, but I've made it so we have the choice to enable pacemaker (disabled by 
default), where keepalived would remain the default for now.

On Mon, Jul 16, 2018 at 2:07 PM Michele Baldessari  wrote:
On Mon, Jul 16, 2018 at 11:48:51AM -0400, Emilien Macchi wrote:

On Mon, Jul 16, 2018 at 11:42 AM Dan Prince  wrote:
[...]


The biggest downside IMO is the fact that our Pacemaker integration is
not containerized. Nor are there any plans to finish the
containerization of it. Pacemaker has to currently run on baremetal
and this makes the installation of it for small dev/test setups a lot
less desirable. It can launch containers 

Re: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker)

2018-07-17 Thread Michele Baldessari
Hi Jarda,

thanks for these perspectives, this is very valuable!

On Tue, Jul 17, 2018 at 06:01:21PM +0200, Jaromir Coufal wrote:
> Not rooting for any approach here, just want to add a bit of factors which 
> might play a role when deciding which way to go:
> 
> A) Performance matters, we should be improving simplicity and speed of
> deployments rather than making it heavier. If the deployment time and
> resource consumption is not significantly higher, I think it doesn’t
> cause an issue. But if there is a significant difference between PCMK
> and keepalived architecture, we would need to review that.

+1 Should the pcmk take substantially more time then I agree, not worth
defaulting to it. Worth also exploring how we could tweak things
to make the setup of the cluster a bit faster (on a single node we can
lower certain wait operations) but full agreement on this point.

> B) Containerization of PCMK plans - eventually we would like to run
> the whole undercloud/overcloud on minimal OS in containers to keep
> improving the operations on the nodes (updates/upgrades/etc). If
> because PCMK we would be forever stuck on BM, it would be a bit of
> pita. As Michele said, maybe we can re-visit this.

So I briefly discussed this in our team, and while it could be
re-explored, we need to be very careful about the tradeoffs.
This would be another layer which would bring quite a bit of complexity
(pcs commands would have to be run inside a container, speed tradeoffs,
more limited possibilities when it comes to upgrading/updating, etc.)

> C) Unification of undercloud/overcloud is important for us, so +1 to
> whichever method is being used in both. But what I know, HA folks went
> to keepalived since it is simpler so would be good to keep in sync
> (and good we have their presence here actually) :)

Right so to be honest, the choice of keepalived on the undercloud for
VIP predates me and I was not directly involved, so I lack the exact
background for that choice (and I could not quickly reconstruct it from git
history). But I think it is/was a reasonable choice for what it needs
doing, although I probably would have picked just configuring the extra
VIPs on the interfaces and have one service less to care about.
+1 in general on the unification, with the caveats that have been
discussed so far.

> D) Undercloud HA is a nice have which I think we want to get to one
> day, but it is not in as big demand as for example edge deployments,
> BM provisioning with pure OS, or multiple envs managed by single
> undercloud. So even though undercloud HA is important, it won’t bring
> operators as many benefits as the previously mentioned improvements.
> Let’s keep it in mind when we are considering the amount of work
> needed for it.

+100

> E) One of the use-cases we want to take into account is expanding a
> single-node deployment (all-in-one) to a 3-node HA controller. I think
> it is important when evaluating PCMK/keepalived

Right, so to be able to implement this, there is no way around having
pacemaker (at least today until we have galera and rabbit).
It still does not mean we have to default to it, but if you want to
scale beyond one node, then there is no other option atm.

> HTH

It did, thanks!

Michele
> — Jarda
> 
> > On Jul 17, 2018, at 05:04, Emilien Macchi  wrote:
> > 
> > Thanks everyone for the feedback, I've made a quick PoC:
> > https://review.openstack.org/#/q/topic:bp/undercloud-pacemaker-default
> > 
> > And I'm currently doing local testing. I'll publish results when progress 
> > is made, but I've made it so we have the choice to enable pacemaker 
> > (disabled by default), where keepalived would remain the default for now.
> > 
> > On Mon, Jul 16, 2018 at 2:07 PM Michele Baldessari  
> > wrote:
> > On Mon, Jul 16, 2018 at 11:48:51AM -0400, Emilien Macchi wrote:
> > > On Mon, Jul 16, 2018 at 11:42 AM Dan Prince  wrote:
> > > [...]
> > > 
> > > > The biggest downside IMO is the fact that our Pacemaker integration is
> > > > not containerized. Nor are there any plans to finish the
> > > > containerization of it. Pacemaker has to currently run on baremetal
> > > > and this makes the installation of it for small dev/test setups a lot
> > > > less desirable. It can launch containers just fine but the pacemaker
> > > > installation itself is what concerns me for the long term.
> > > >
> > > > Until we have plans for containizing it I suppose I would rather see
> > > > us keep keepalived as an option for these smaller setups. We can
> > > > certainly change our default Undercloud to use Pacemaker (if we choose
> > > > to do so). But having keepalived around for "lightweight" (zero or low
> > > > footprint) installs that work is really quite desirable.
> > > >
> > > 
> > > That's a good point, and I agree with your proposal.
> > > Michele, what's the long term plan regarding containerized pacemaker?
> > 
> > Well, we kind of started evaluating it (there was definitely not enough
> > time around pike/queens as we 

[openstack-dev] [tc] Technical Committee update for 17 July

2018-07-17 Thread Doug Hellmann
This is the weekly summary of work being done by the Technical
Committee members. The full list of active items is managed in the
wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker

We also track TC objectives for the cycle using StoryBoard at:
https://storyboard.openstack.org/#!/project/923

== Recent Activity ==

Project updates:

- Add ansible-role-openstack-operations to governance
  https://review.openstack.org/#/c/578963/

- Add ansible-role-tripleo-* to TripleO project
  https://review.openstack.org/#/c/579952/1

Other approved changes:

- update the PTI to use tox for building docs
  https://review.openstack.org/#/c/580495/

Office hour logs:

(I sent the update late last week, so we have only had one office hour since 
the last update.)

- http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-12.log.html#t2018-07-12T14:59:39

== Ongoing Discussions ==

The Adjutant team application has been approved. Welcome!

- https://review.openstack.org/553643 project team application
- http://lists.openstack.org/pipermail/openstack-dev/2018-July/132308.html 
welcome announcement

Tony and the rest of the election officials are scheduling the Stein
PTL elections.

- https://review.openstack.org/#/c/582109/ stein PTL election preparations

Zane has updated his proposal for diversity requirements or guidance
for new project teams.

- https://review.openstack.org/#/c/567944/

== TC member actions/focus/discussions for the coming week(s) ==

We've made good progress on the health checks. If you anticipate
having any trouble contacting your assigned teams before the PTG
please let me know.

Remember that we agreed to send status updates on initiatives
separately to openstack-dev every two weeks. If you are working on
something for which there has not been an update in a couple of
weeks, please consider summarizing the status.

== Contacting the TC ==

The Technical Committee uses a series of weekly "office hour" time
slots for synchronous communication. We hope that by having several
such times scheduled, we will have more opportunities to engage
with members of the community from different timezones.

Office hour times in #openstack-tc:

- 09:00 UTC on Tuesdays
- 01:00 UTC on Wednesdays
- 15:00 UTC on Thursdays

If you have something you would like the TC to discuss, you can add
it to our office hour conversation starter etherpad at:
https://etherpad.openstack.org/p/tc-office-hour-conversation-starters

Many of us also run IRC bouncers which stay in #openstack-tc most
of the time, so please do not feel that you need to wait for an
office hour time to pose a question or offer a suggestion. You can
use the string "tc-members" to alert the members to your question.

You will find channel logs with past conversations at
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/

If you expect your topic to require significant discussion or to
need input from members of the community other than the TC, please
start a mailing list discussion on openstack-dev at lists.openstack.org
and use the subject tag "[tc]" to bring it to the attention of TC
members.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-17 Thread Erlon Cruz
Hi Cinder and Nova folks,

Working on some tests for our drivers, I stumbled upon the tempest test
'force_detach_volume', which calls the Cinder API passing a 'None'
connector. At the time this was added, several CIs went down, and people
started discussing whether this (accepting/sending a None connector)
would be the proper behavior for what a driver is expected to do [1]. So,
some CIs started just skipping that test [2][3][4] and others implemented
fixes that made the driver disconnect the volume from all hosts if a None
connector was received [5][6][7].

While implementing this fix seems to be straightforward, I feel that just
removing the volume from all hosts is not the correct thing to do, mainly
considering that we can have multi-attach.

So, my questions are: What is the best way to fix this problem? Should the
Cinder API continue to accept detachments with None connectors? If so,
what would be the effects on other Nova attachments for the same volume?
Is there any side effect if the volume is not multi-attached?

In addition to this thread, I will bring this topic to tomorrow's Cinder
meeting, so please join if you have something to share.

Erlon

___
[1] https://bugs.launchpad.net/cinder/+bug/1686278
[2]
https://openstack-ci-logs.aws.infinidat.com/14/578114/2/check/dsvm-tempest-infinibox-fc/14fa930/console.html
[3]
http://54.209.116.144/14/578114/2/check/kaminario-dsvm-tempest-full-iscsi/ce750c8/console.html
[4]
http://logs.openstack.netapp.com/logs/14/578114/2/upstream-check/cinder-cDOT-iSCSI/8e2c549/console.html#_2018-07-16_20_06_16_937286
[5]
https://review.openstack.org/#/c/551832/1/cinder/volume/drivers/dell_emc/vnx/adapter.py
[6]
https://review.openstack.org/#/c/550324/2/cinder/volume/drivers/hpe/hpe_3par_common.py
[7]
https://review.openstack.org/#/c/536778/2/cinder/volume/drivers/infinidat.py
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][all] Heads up for the new oslo.policy release

2018-07-17 Thread Ben Nemec
I just wanted to send a quick note about the recent oslo.policy release 
which may impact some projects.  Some new functionality was added that 
allows a context object to be passed in to the enforcer directly, but as 
part of that we added a check that the type of the object passed in was 
valid for use.  This caused an issue in Glance's unit tests because they 
were mocking the context object and a Mock object didn't pass the type 
check.  This was fixed in [1], but if any other projects have a similar 
pattern in their unit tests it is possible it may affect them as well.
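To illustrate the failure mode and one way around it -- this is a hedged
sketch assuming the new check is an isinstance()-style test against
oslo.context's RequestContext, not the actual Glance patch:

    # A bare Mock is rejected by a type check against the real context class,
    # while a spec'd Mock (or a real RequestContext) passes, because
    # Mock(spec=...) reports the spec class as its __class__.
    from unittest import mock

    from oslo_context import context as ctx

    bad_context = mock.Mock()                          # fails isinstance()
    good_context = mock.Mock(spec=ctx.RequestContext)  # passes isinstance()
    real_context = ctx.RequestContext()                # passes, of course

    assert not isinstance(bad_context, ctx.RequestContext)
    assert isinstance(good_context, ctx.RequestContext)
    assert isinstance(real_context, ctx.RequestContext)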


If you do run into any issues with this, please contact the Oslo team in 
#openstack-oslo or with the [oslo] tag on the mailing list so we can 
help resolve them.  Thanks.


-Ben

1: https://review.openstack.org/#/c/582995/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Reminder about Oslo feature freeze

2018-07-17 Thread Ben Nemec
And we are now officially in feature freeze for Oslo libraries.  Only 
bugfixes should be going in at this point.


I will note that the config drivers work is still in the process of 
merging because some of the later patches in that series got hung up on 
a unit test bug.  I'm holding off on doing final feature releases until 
that has all merged.


-Ben

On 07/05/2018 11:46 AM, Ben Nemec wrote:

Hi,

This is just a reminder that Oslo observes feature freeze earlier than 
other projects so those projects have time to implement any new features 
from Oslo.  Per the policy[1] we freeze one week before the non-client 
library feature freeze, which is coming in two weeks.  Therefore, we 
have about one week to land new features in Oslo.  Anything that misses 
the deadline will most likely need to wait until Stein.


Feel free to contact the Oslo team with any comments or questions.  Thanks.

-Ben

1: 
http://specs.openstack.org/openstack/oslo-specs/specs/policy/feature-freeze.html 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] How to look up a project name from Neutron server code?

2018-07-17 Thread Neil Jerram
Thanks Aditya, that looks like just what I need.

Best wishes,
Neil


On Tue, Jul 17, 2018 at 5:48 PM Aditya Vaja  wrote:

> hey neil,
>
> neutron.conf has a section called '[keystone_authtoken]’ which has
> credentials to query keystone as neutron. you can read the config as you’d
> typically do from the mechanism driver for any other property using
> oslo.config.
>
> you could then use python-keystoneclient with those creds to query the
> mapping. a sample is given in the keystoneclient repo [1].
>
> via telegram
>
> [1]
> https://github.com/openstack/python-keystoneclient/blob/650716d0dd30a73ccabe3f0ec20eb722ca0d70d4/keystoneclient/v3/client.py#L102-L116
> On Tue, Jul 17, 2018 at 9:58 PM, Neil Jerram  wrote:
>
> On Tue, Jul 17, 2018 at 3:55 PM Jay Pipes  wrote:
>
>> On 07/17/2018 03:36 AM, Neil Jerram wrote:
>> > Can someone help me with how to look up a project name (aka tenant
>> name)
>> > for a known project/tenant ID, from code (specifically a mechanism
>> > driver) running in the Neutron server?
>> >
>> > I believe that means I need to make a GET REST call as here:
>> > https://developer.openstack.org/api-ref/identity/v3/index.html#projects.
>> But
>> > I don't yet understand how a piece of Neutron server code can ensure
>> > that it has the right credentials to do that. If someone happens to
>> > have actual code for doing this, I'm sure that would be very helpful.
>> >
>> > (I'm aware that whenever the Neutron server processes an API request,
>> > the project name for the project that generated that request is added
>> > into the request context. That is great when my code is running in an
>> > API request context. But there are other times when the code isn't in a
>> > request context and still needs to map from a project ID to project
>> > name; hence the question here.)
>>
>> Hi Neil,
>>
>> You basically answered your own question above :) The neutron request
>> context gets built from oslo.context's Context.from_environ() [1] which
>> has this note in the implementation [2]:
>>
>> # Load a new context object from the environment variables set by
>> # auth_token middleware. See:
>> #
>>
>> https://docs.openstack.org/keystonemiddleware/latest/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service
>>
>> So, basically, simply look at the HTTP headers for HTTP_X_PROJECT_NAME.
>> If you don't have access to a HTTP headers, then you'll need to pass
>> some context object/struct to the code you're referring to. Might as
>> well pass the neutron RequestContext (derived from oslo_context.Context)
>> to the code you're referring to and you get all this for free.
>>
>> Best,
>> -jay
>>
>> [1]
>>
>> https://github.com/openstack/oslo.context/blob/4abd5377e4d847102a4e87a528d689e31cc1713c/oslo_context/context.py#L424
>>
>> [2]
>>
>> https://github.com/openstack/oslo.context/blob/4abd5377e4d847102a4e87a528d689e31cc1713c/oslo_context/context.py#L433-L435
>
>
> Many thanks for this reply, Jay.
>
> If I'm understanding fully, I believe it all works beautifully so long as
> the Neutron server is processing a specific API request, e.g. a port CRUD
> operation. Then, as you say, the RequestContext includes the name of the
> project/tenant that originated that request.
>
> I have an additional requirement, though, to do an occasional audit of
> standing resources in the Neutron DB, and to check that my mechanism
> driver's programming for them is correct. To do that, I have an independent
> eventlet thread that runs in admin context and occasionally queries Neutron
> resources, e.g. all the ports. For each port, the Neutron DB data includes
> the project_id, but not project_name, and I'd like at that point to be able
> to map from the project_id for each port to project_name.
>
> Do you have any thoughts on how I could do that? (E.g. perhaps there is
> some way of generating and looping round a request with the project_id,
> such that the middleware populates the project_name... but that sounds a
> bit baroque; I would hope that there would be a way of doing a simpler
> Keystone DB lookup.)
>
> Regards,
> Neil
>
>
> __
> OpenStack Development Mailing List (not for usage questions) Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-17 Thread Fox, Kevin M
Inlining with KF> 

From: Thierry Carrez [thie...@openstack.org]
Sent: Tuesday, July 17, 2018 7:44 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

Finally found the time to properly read this...

Zane Bitter wrote:
> [...]
> We chose to add features to Nova to compete with vCenter/oVirt, and not
to add features that would have enabled OpenStack as a whole to compete
> with more than just the compute provisioning subset of EC2/Azure/GCP.

Could you give an example of an EC2 action that would be beyond the
"compute provisioning subset" that you think we should have built into
Nova ?

KF> How about this one... 
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
 :/
KF> IMO, its lack really crippled the use case. I've been harping on this one 
for over 4 years now...

> Meanwhile, the other projects in OpenStack were working on building the
> other parts of an AWS/Azure/GCP competitor. And our vague one-sentence
> mission statement allowed us all to maintain the delusion that we were
> all working on the same thing and pulling in the same direction, when in
> truth we haven't been at all.

Do you think that organizing (tying) our APIs along [micro]services,
rather than building a sanely-organized user API on top of a
sanely-organized set of microservices, played a role in that divide ?

KF> Slightly off question, I think. A combination of microservice APIs plus no
API team looking at the APIs as a whole allowed use cases to slip by.
KF> Microservice APIs might have been OK with overall shepherds. Though maybe
that is what you were implying with 'sanely'?

> We can decide that we want to be one, or the other, or both. But if we
> don't all decide together then a lot of us are going to continue wasting
> our time working at cross-purposes.

If you are saying that we should choose between being vCenter or AWS, I
would definitely say the latter. But I'm still not sure I see this issue
in such a binary manner.

KF> No, he said one, the other, or both. But the lack of decision allowed some
teams to prioritize one without realizing its effects on others.

KF> There are multiple vCenter replacements in the open source world. For
example, oVirt. It's already way better at it than Nova.
KF> There is not a replacement for AWS in the open source world. The hope was
OpenStack would be that, but others in the community did not agree with that
vision.
KF> Now that the community has changed drastically, what is the feeling now? We 
must decide.
KF> Kubernetes has provided a solid base for doing cloudy things, which is
great. But the organization does not care to replace other AWS/Azure/etc
services, because there are companies interested in selling k8s on top of
AWS/Azure/etc and integrating with the other services they already provide.
KF> So, there is still an opportunity in the open source community for someone
to write an open source AWS alternative. VMs are just a very small part of it.

KF> Is that OpenStack, or some other project?

Imagine if (as suggested above) we refactored the compute node and give
it a user API, would that be one, the other, both ? Or just a sane
addition to improve what OpenStack really is today: a set of open
infrastructure components providing different services with each their
API, with slight gaps and overlaps between them ?

Personally, I'm not very interested in discussing what OpenStack could
have been if we started building it today. I'm much more interested in
discussing what to add or change in order to make it usable for more use
cases while continuing to serve the needs of our existing users. And I'm
not convinced that's an either/or choice...

KF> Sometimes it is time to hit the reset button, because either:
 a> you now know something really important that you didn't when you built it,
 b> the world changed and you can no longer keep going on the path you were on, or
 c> the technical debt has grown so large that it is cheaper to start again.

KF> OpenStack's current architectural implementation really feels 1.0-ish to me,
and all of those reasons are relevant.
KF> I'm not saying we should just blindly hit the reset button, but I think it
should be discussed/evaluated. Leaving it alone may have too much of a
dragging effect on contribution.

KF> I'm also not saying we leave existing users without a migration path 
either. Maybe an OpenStack 2.0 with migration tools would be an option.

KF> OpenStack's architecture is really hamstringing it at this point. If it
wants to make progress at chipping away at AWS, it can't be trying to build on
top of the very narrow commons OpenStack provides at present and the
boilerplate convention of: 1, start a new project; 2, create a SQL database;
3, create rabbit queues; 4, create an API service; 5, create a scheduler
service; 6, create agents; 7, create keystone endpoints; 8, get it wrapped in
32 different deployment tools; 9, etc.

Thanks,

[openstack-dev] [all][tc][release][election][adjutant] Welcome Adjutant as an official project!

2018-07-17 Thread Doug Hellmann
The Adjutant team's application [1] to become an official project
has been approved. Welcome!

As I said on the review, because it is past the deadline for Rocky
membership, Adjutant will not be considered part of the Rocky
release, but a future release can be part of Stein.

The team should complete the onboarding process for new projects,
including holding PTL elections for Stein, setting up deliverable
files in the openstack/releases repository, and adding meeting
information to eavesdrop.openstack.org.

I have left a comment on the patch setting up the Stein election
to ask that the Adjutant team be included.  We can also add Adjutant
to the list of projects on docs.openstack.org for Stein, after
updating your publishing job(s).

Doug

[1] https://review.openstack.org/553643

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Browbeat] proposing agopi as core

2018-07-17 Thread Joe Talerico
agopi**

On Tue, Jul 17, 2018 at 1:33 PM, Joe Talerico  wrote:

> Proposing agopi as core for OpenStack Browbeat. He has been instrumental in
> taking over the CI components of Browbeat. His contributions and reviews
> reflect that!
>
> Thanks!
> Joe
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Browbeat] proposing agpoi as core

2018-07-17 Thread Joe Talerico
Proposing agpoi as core for OpenStack Browbeat. He has been instrumental in
taking over the CI components of Browbeat. His contributions and reviews
reflect that!

Thanks!
Joe
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] How to look up a project name from Neutron server code?

2018-07-17 Thread Aditya Vaja
hey neil,
neutron.conf has a section called '[keystone_authtoken]' which has credentials 
to query keystone as neutron. you can read the config as you’d typically do 
from the mechanism driver for any other property using oslo.config.
you could then use python-keystoneclient with those creds to query the mapping. 
a sample is given in the keystoneclient repo [1].
via telegram
[1] 
https://github.com/openstack/python-keystoneclient/blob/650716d0dd30a73ccabe3f0ec20eb722ca0d70d4/keystoneclient/v3/client.py#L102-L116
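for illustration, the lookup could go roughly like the sketch below (not the
sample at [1]). it assumes the [keystone_authtoken] auth options are already
registered, which keystonemiddleware does when neutron-server loads; the exact
option set depends on the deployment.

    # Hedged sketch: build a keystone session from the [keystone_authtoken]
    # credentials in neutron.conf and resolve a project name from its ID.
    from keystoneauth1 import loading as ks_loading
    from keystoneauth1 import session as ks_session
    from keystoneclient.v3 import client as ks_client
    from oslo_config import cfg

    CONF = cfg.CONF

    def project_name_from_id(project_id):
        auth = ks_loading.load_auth_from_conf_options(CONF, 'keystone_authtoken')
        sess = ks_session.Session(auth=auth)
        keystone = ks_client.Client(session=sess)
        return keystone.projects.get(project_id).name
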
 On Tue, Jul 17, 2018 at 9:58 PM, Neil Jerram  wrote:
On Tue, Jul 17, 2018 at 3:55 PM Jay Pipes <jaypi...@gmail.com> wrote:
On 07/17/2018 03:36 AM, Neil Jerram wrote:
> Can someone help me with how to look up a project name (aka tenant name)
> for a known project/tenant ID, from code (specifically a mechanism
> driver) running in the Neutron server?
>
> I believe that means I need to make a GET REST call as here:
> https://developer.openstack.org/api-ref/identity/v3/index.html#projects. But
> I don't yet understand how a piece of Neutron server code can ensure
> that it has the right credentials to do that. If someone happens to
> have actual code for doing this, I'm sure that would be very helpful.
>
> (I'm aware that whenever the Neutron server processes an API request,
> the project name for the project that generated that request is added
> into the request context. That is great when my code is running in an
> API request context. But there are other times when the code isn't in a
> request context and still needs to map from a project ID to project
> name; hence the question here.)

Hi Neil,

You basically answered your own question above :) The neutron request
context gets built from oslo.context's Context.from_environ() [1] which
has this note in the implementation [2]:

# Load a new context object from the environment variables set by
# auth_token middleware. See:
#
https://docs.openstack.org/keystonemiddleware/latest/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service

So, basically, simply look at the HTTP headers for HTTP_X_PROJECT_NAME.
If you don't have access to a HTTP headers, then you'll need to pass
some context object/struct to the code you're referring to. Might as
well pass the neutron RequestContext (derived from oslo_context.Context)
to the code you're referring to and you get all this for free.

Best,
-jay

[1]
https://github.com/openstack/oslo.context/blob/4abd5377e4d847102a4e87a528d689e31cc1713c/oslo_context/context.py#L424

[2]
https://github.com/openstack/oslo.context/blob/4abd5377e4d847102a4e87a528d689e31cc1713c/oslo_context/context.py#L433-L435
Many thanks for this reply, Jay.
If I'm understanding fully, I believe it all works beautifully so long as the 
Neutron server is processing a specific API request, e.g. a port CRUD 
operation. Then, as you say, the RequestContext includes the name of the 
project/tenant that originated that request.
I have an additional requirement, though, to do an occasional audit of standing 
resources in the Neutron DB, and to check that my mechanism driver's 
programming for them is correct. To do that, I have an independent eventlet 
thread that runs in admin context and occasionally queries Neutron resources, 
e.g. all the ports. For each port, the Neutron DB data includes the project_id, 
but not project_name, and I'd like at that point to be able to map from the 
project_id for each port to project_name.
Do you have any thoughts on how I could do that? (E.g. perhaps there is some 
way of generating and looping round a request with the project_id, such that 
the middleware populates the project_name... but that sounds a bit baroque; I 
would hope that there would be a way of doing a simpler Keystone DB lookup.)
Regards, Neil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] How to look up a project name from Neutron server code?

2018-07-17 Thread Neil Jerram
On Tue, Jul 17, 2018 at 3:55 PM Jay Pipes  wrote:

> On 07/17/2018 03:36 AM, Neil Jerram wrote:
> > Can someone help me with how to look up a project name (aka tenant name)
> > for a known project/tenant ID, from code (specifically a mechanism
> > driver) running in the Neutron server?
> >
> > I believe that means I need to make a GET REST call as here:
> > https://developer.openstack.org/api-ref/identity/v3/index.html#projects.
> But
> > I don't yet understand how a piece of Neutron server code can ensure
> > that it has the right credentials to do that.  If someone happens to
> > have actual code for doing this, I'm sure that would be very helpful.
> >
> > (I'm aware that whenever the Neutron server processes an API request,
> > the project name for the project that generated that request is added
> > into the request context.  That is great when my code is running in an
> > API request context.  But there are other times when the code isn't in a
> > request context and still needs to map from a project ID to project
> > name; hence the question here.)
>
> Hi Neil,
>
> You basically answered your own question above :) The neutron request
> context gets built from oslo.context's Context.from_environ() [1] which
> has this note in the implementation [2]:
>
> # Load a new context object from the environment variables set by
> # auth_token middleware. See:
> #
>
> https://docs.openstack.org/keystonemiddleware/latest/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service
>
> So, basically, simply look at the HTTP headers for HTTP_X_PROJECT_NAME.
> If you don't have access to a HTTP headers, then you'll need to pass
> some context object/struct to the code you're referring to. Might as
> well pass the neutron RequestContext (derived from oslo_context.Context)
> to the code you're referring to and you get all this for free.
>
> Best,
> -jay
>
> [1]
>
> https://github.com/openstack/oslo.context/blob/4abd5377e4d847102a4e87a528d689e31cc1713c/oslo_context/context.py#L424
>
> [2]
>
> https://github.com/openstack/oslo.context/blob/4abd5377e4d847102a4e87a528d689e31cc1713c/oslo_context/context.py#L433-L435


Many thanks for this reply, Jay.

If I'm understanding fully, I believe it all works beautifully so long as
the Neutron server is processing a specific API request, e.g. a port CRUD
operation.  Then, as you say, the RequestContext includes the name of the
project/tenant that originated that request.

I have an additional requirement, though, to do an occasional audit of
standing resources in the Neutron DB, and to check that my mechanism
driver's programming for them is correct.  To do that, I have an
independent eventlet thread that runs in admin context and occasionally
queries Neutron resources, e.g. all the ports.  For each port, the Neutron
DB data includes the project_id, but not project_name, and I'd like at that
point to be able to map from the project_id for each port to project_name.

Do you have any thoughts on how I could do that?  (E.g. perhaps there is
some way of generating and looping round a request with the project_id,
such that the middleware populates the project_name... but that sounds a
bit baroque; I would hope that there would be a way of doing a simpler
Keystone DB lookup.)

Regards,
Neil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Updates/upgrades equivalent for external_deploy_tasks

2018-07-17 Thread Giulio Fidente
On 07/10/2018 04:20 PM, Jiří Stránský wrote:
> Hi,
> 
> with the move to config-download deployments, we'll be moving from
> executing external installers (like ceph-ansible) via Heat resources
> encapsulating Mistral workflows towards executing them via Ansible
> directly (nested Ansible process via external_deploy_tasks).
> 
> Updates and upgrades still need to be addressed here. I think we should
> introduce external_update_tasks and external_upgrade_tasks for this
> purpose, but i see two options how to construct the workflow with them.
> 
> During update (mentioning just updates, but upgrades would be done
> analogously) we could either:
> 
> A) Run external_update_tasks, then external_deploy_tasks.
> 
> This works with the assumption that updates are done very similarly to
> deployment. The external_update_tasks could do some prep work and/or
> export Ansible variables which then could affect what
> external_deploy_tasks do (e.g. in case of ceph-ansible we'd probably
> override the playbook path). This way we could also disable specific
> parts of external_deploy_tasks on update, in case reuse is undesirable
> in some places.
thanks

+1 on A from me as well

we currently cycle through a list of playbooks to execute which can be
given as a Heat parameter ... I suppose we'll need to find a way to make
an ansible variable override the Heat value
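to make option A concrete, a rough sketch (the variable and task names here
are assumptions, and external_update_tasks is only the interface proposed in
this thread, not something that exists yet):

    # external_update_tasks exports an override that external_deploy_tasks
    # consult, falling back to the Heat-provided playbook list otherwise.
    external_update_tasks:
      - name: Point ceph-ansible at the update playbook
        set_fact:
          ceph_ansible_playbooks_override:
            - /usr/share/ceph-ansible/infrastructure-playbooks/rolling_update.yml

    external_deploy_tasks:
      - name: Run the ceph-ansible playbooks
        command: >
          ansible-playbook -i {{ ceph_ansible_inventory }} {{ item }}
        with_items: "{{ ceph_ansible_playbooks_override | default(ceph_ansible_playbooks) }}"
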
-- 
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 18-29

2018-07-17 Thread Chris Dent


HTML: https://anticdent.org/tc-report-18-29.html

Again a relatively slow week for TC discussion. Several members were
travelling for one reason or another.

A theme from the past week is a recurring one: How can OpenStack,
the community, highlight gaps where additional contribution may be
needed, and what can the TC, specifically, do to help?

Julia relayed [that question on
Wednesday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-11.log.html#t2018-07-11T00:39:16)
and it meandered a bit from there. Are the mechanics of open source
a bit strange in OpenStack because of continuing boundaries between
the people who sell it, package it, build it, deploy it, operate it,
and use it? If so, how do we accelerate blurring those boundaries?
The [combined
PTG](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-11.log.html#t2018-07-11T00:39:16)
will help, some.

At Thursday's office hours Alan Clark [listened
in](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-12.log.html#t2018-07-12T15:02:34).
He's a welcome presence from the Foundation Board. At the last
summit in Vancouver members of the TC and the Board made a
commitment to improve communication. Meanwhile, [back on
Wednesday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-11.log.html#t2018-07-11T15:29:30)
I expressed a weird sense of jealousy of all the nice visible things
one sees the foundation doing for the newer strategic areas in the
foundation. The issue here is not that the foundation doesn't do
stuff for OpenStack-classic, but that the new stuff is visible and
_over there_.

That office hour included [more
talk](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-12.log.html#t2018-07-12T15:07:27)
about project-need visibility.

Lately, I've been feeling that it is more important to make the gaps
in contribution visible than it is to fill them. If we continue to
perform above and beyond, there is no incentive for our corporate
value extractors to supplement their investment. That way lies
burnout. The [health
tracker](https://wiki.openstack.org/wiki/OpenStack_health_tracker)
is part of making things more visible. So are [OpenStack wide
goals](https://governance.openstack.org/tc/goals/index.html). But
there is more we can do as a community and as individuals. Don't be
a hero: If you're overwhelmed or overworked tell your peers and your
management.

In other news: Zane summarized some of his thoughts about
[Limitations of the Layered Model of
OpenStack](https://www.zerobanana.com/archive/2018/07/17#openstack-layer-model-limitations).
This is a continuation of the technical vision discussions that have
been happening on [an
etherpad](https://etherpad.openstack.org/p/tech-vision-2018) and
[email
thread](http://lists.openstack.org/pipermail/openstack-dev/2018-July/131955.html).

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] The Weekly Owl - 25th Edition

2018-07-17 Thread Emilien Macchi
Your fellow reporter took a break from writing, but is now back with his pen.

Welcome to the twenty-fifth edition of a weekly update in TripleO world!
The goal is to provide a short reading (less than 5 minutes) to learn
what's new this week.
Any contributions and feedback are welcome.
Link to the previous version:
http://lists.openstack.org/pipermail/openstack-dev/2018-June/131426.html

+-----------------------+
| General announcements |
+-----------------------+

+--> Rocky Milestone 3 is next week. After that, any feature code will require
a Feature Freeze Exception (FFE), requested on the mailing list. We'll enter a
bug-fix-only stabilization period until we can push the first stable
version of Rocky.
+--> Next PTG will be in Denver, please propose topics:
https://etherpad.openstack.org/p/tripleoci-ptg-stein
+--> Multiple squads are currently brainstorming a framework to provide
validations pre/post upgrades - stay in touch!

+------------------------+
| Continuous Integration |
+------------------------+

+--> Sprint theme: migration to Zuul v3 (More on
https://trello.com/c/vyWXcKOB/841-sprint-16-goals)
+--> Sagi is the rover and Chandan is the ruck. Please report any CI
issues to them.
+--> Promotion age is 4 days on master, 0 days on Queens and Pike, and 1 day on
Ocata.
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

+----------+
| Upgrades |
+----------+

+--> Good progress on major upgrades workflow, need reviews!
+--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status

+------------+
| Containers |
+------------+

+--> We switched python-tripleoclient to deploy containerized undercloud by
default!
+--> Image prepare via workflow is still work in progress.
+--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status

+-----------------+
| config-download |
+-----------------+

+--> UI integration is almost done (needs review)
+--> Bug with failure listing is being fixed:
https://bugs.launchpad.net/tripleo/+bug/1779093
+--> More:
https://etherpad.openstack.org/p/tripleo-config-download-squad-status

+-------------+
| Integration |
+-------------+

+--> We're enabling decoupled deployment plans, e.g. for OpenShift, DPDK, etc.:
https://review.openstack.org/#/q/topic:alternate_plans+(status:open+OR+status:merged)
(need reviews).
+--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status

+--------+
| UI/CLI |
+--------+

+--> Good progress on network configuration via UI
+--> Config-download patches are being reviewed and a lot of testing is
going on.
+--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status

+-------------+
| Validations |
+-------------+

+--> Working on OpenShift validations, need reviews.
+--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status

+------------+
| Networking |
+------------+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status

+-----------+
| Workflows |
+-----------+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status

+----------+
| Security |
+----------+

+--> Working on the Secrets Management and Limit TripleO Users efforts
+--> More: https://etherpad.openstack.org/p/tripleo-security-squad

+-----------+
| Owl fact  |
+-----------+
Elf owls live in cacti. They are the smallest owls, and live in the
southwestern United States and Mexico. An elf owl will sometimes make its home in
the giant saguaro cactus, nesting in holes made by other animals. However,
the elf owl isn't picky and will also live in trees or on telephone poles.

Source: http://mentalfloss.com/article/68473/15-mysterious-facts-about-owls

Thank you all for reading and stay tuned!
--
Your fellow reporter, Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation

2018-07-17 Thread Jimmy McArthur

Ian raises some great points :) I'll try to address below...

Ian Y. Choi wrote:

> Hello,
>
> When I saw overall translation source strings on container whitepaper,
> I would infer that new edge computing whitepaper
> source strings would include HTML markup tags.
One of the things I discussed with Ian and Frank in Vancouver is the 
expense of recreating PDFs with new translations.  It's prohibitively 
expensive for the Foundation as it requires design resources which we 
just don't have.  As a result, we created the Containers whitepaper in 
HTML, so that it could be easily updated w/o working with outside design 
contractors.  I indicated that we would also be moving the Edge paper to 
HTML so that we could prevent that additional design resource cost.

> On the other hand, the source strings of edge computing whitepaper
> which I18n team previously translated do not include HTML markup tags,
> since the source strings are based on just text format.
The version that Akihiro put together was based on the Edge PDF, which 
we unfortunately didn't have the resources to implement in the same format.


> I really appreciate Akihiro's work on RST-based support on publishing
> translated edge computing whitepapers, since
> translators do not have to re-translate all the strings.
I would like to second this. It took a lot of initiative to work on the 
RST-based translation.  At the moment, it's just not usable for the 
reasons mentioned above.

> On the other hand, it seems that I18n team needs to investigate on
> translating similar strings of HTML-based edge computing whitepaper
> source strings, which would discourage translators.
Can you expand on this? I'm not entirely clear on why the HTML based 
translation is more difficult.


> That's my point of view on translating edge computing whitepaper.
>
> For translating container whitepaper, I want to further ask the
> followings since *I18n-based tools*
> would mean for translators that translators can test and publish
> translated whitepapers locally:
>
> - How to build translated container whitepaper using original
>   Silverstripe-based repository?
>   https://docs.openstack.org/i18n/latest/tools.html describes well how
>   to build translated artifacts for RST-based OpenStack repositories
>   but I could not find the way how to build translated container
>   whitepaper with translated resources on Zanata.
This is a little tricky.  It's possible to set up a local version of the 
OpenStack website 
(https://github.com/OpenStackweb/openstack-org/blob/master/installation.md).  
However, we have to manually ingest the po files as they are completed 
and then push them out to production, so that wouldn't do much to help 
with your local build.  I'm open to suggestions on how we can make this 
process easier for the i18n team.


Thank you,
Jimmy



> With many thanks,
>
> /Ian

Jimmy McArthur wrote on 7/17/2018 11:01 PM:

Frank,

I'm sorry to hear about the displeasure around the Edge paper.  As 
mentioned in a prior thread, the RST format that Akihiro worked did 
not work with the  Zanata process that we have been using with our 
CMS.  Additionally, the existing EDGE page is a PDF, so we had to 
build a new template to work with the new HTML whitepaper layout we 
created for the Containers paper. I outlined this in the thread " 
[OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge Computing 
Whitepaper Translation" on 6/25/18 and mentioned we would be ready 
with the template around 7/13.


We completed the work on the new whitepaper template and then put out 
the pot files on Zanata so we can get the po language files back. If 
this process is too cumbersome for the translation team, I'm open to 
discussion, but right now our entire translation process is based on 
the official OpenStack Docs translation process outlined by the i18n 
team: https://docs.openstack.org/i18n/latest/en_GB/tools.html


Again, I realize Akihiro put in some work on his own proposing the 
new translation type. If the i18n team is moving to this format 
instead, we can work on redoing our process.


Please let me know if I can clarify further.

Thanks,
Jimmy

Frank Kloeker wrote:

Hi Jimmy,

permission was added for you and Sebastian. The Container Whitepaper 
is on the Zanata frontpage now. But we removed Edge Computing 
whitepaper last week because there is a kind of displeasure in the 
team since the results of translation are still not published beside 
Chinese version. It would be nice if we have a commitment from the 
Foundation that results are published in a specific timeframe. This 
includes your requirements until the translation should be available.


thx Frank

Am 2018-07-16 17:26, schrieb Jimmy McArthur:

Sorry, I should have also added... we additionally need permissions so
that we can add a new version of the pot file to this project:
https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 



Thanks!
Jimmy



Jimmy McArthur wrote:

Hi all -

We have both of the current whitepapers up and 

Re: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker)

2018-07-17 Thread Jaromir Coufal
Not rooting for any approach here, just want to add a few factors which
might play a role when deciding which way to go:

A) Performance matters; we should be improving simplicity and speed of
deployments rather than making them heavier. If the deployment time and resource
consumption are not significantly higher, I think it doesn't cause an issue. But
if there is a significant difference between the PCMK and keepalived architectures,
we would need to review that.

B) Containerization of PCMK plans - eventually we would like to run the whole
undercloud/overcloud on a minimal OS in containers to keep improving the
operations on the nodes (updates/upgrades/etc.). If, because of PCMK, we were
forever stuck on bare metal, it would be a bit of a pain. As Michele said, maybe we can
re-visit this.

C) Unification of undercloud/overcloud is important for us, so +1 to whichever
method is being used in both. But from what I know, HA folks went to keepalived
since it is simpler, so it would be good to keep in sync (and good we have their
presence here actually) :)

D) Undercloud HA is a nice-to-have which I think we want to get to one day, but it
is not in as big demand as, for example, edge deployments, BM provisioning with
a pure OS, or multiple envs managed by a single undercloud. So even though
undercloud HA is important, it won't bring operators as many benefits as the
previously mentioned improvements. Let's keep that in mind when we are
considering the amount of work needed for it.

E) One of the use-cases we want to take into account is expanding a single-node
deployment (all-in-one) to a 3-node HA controller. I think it is important when
evaluating PCMK/keepalived.

HTH
— Jarda

> On Jul 17, 2018, at 05:04, Emilien Macchi  wrote:
> 
> Thanks everyone for the feedback, I've made a quick PoC:
> https://review.openstack.org/#/q/topic:bp/undercloud-pacemaker-default
> 
> And I'm currently doing local testing. I'll publish results when progress is 
> made, but I've made it so we have the choice to enable pacemaker (disabled by 
> default), where keepalived would remain the default for now.
> 
> On Mon, Jul 16, 2018 at 2:07 PM Michele Baldessari  wrote:
> On Mon, Jul 16, 2018 at 11:48:51AM -0400, Emilien Macchi wrote:
> > On Mon, Jul 16, 2018 at 11:42 AM Dan Prince  wrote:
> > [...]
> > 
> > > The biggest downside IMO is the fact that our Pacemaker integration is
> > > not containerized. Nor are there any plans to finish the
> > > containerization of it. Pacemaker has to currently run on baremetal
> > > and this makes the installation of it for small dev/test setups a lot
> > > less desirable. It can launch containers just fine but the pacemaker
> > > installation itself is what concerns me for the long term.
> > >
> > > Until we have plans for containizing it I suppose I would rather see
> > > us keep keepalived as an option for these smaller setups. We can
> > > certainly change our default Undercloud to use Pacemaker (if we choose
> > > to do so). But having keepalived around for "lightweight" (zero or low
> > > footprint) installs that work is really quite desirable.
> > >
> > 
> > That's a good point, and I agree with your proposal.
> > Michele, what's the long term plan regarding containerized pacemaker?
> 
> Well, we kind of started evaluating it (there was definitely not enough
> time around pike/queens as we were busy landing the bundles code), then
> due to discussions around k8s it kind of got off our radar. We can
> at least resume the discussions around it and see how much effort it
> would be. I'll bring it up with my team and get back to you.
> 
> cheers,
> Michele
> -- 
> Michele Baldessari
> C2A5 9DA3 9961 4FFB E01B  D0BC DDD4 DCCB 7515 5C6D
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> -- 
> Emilien Macchi
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Zun UI questions

2018-07-17 Thread Amy Marrich
Hongbin,

I just wanted to follow up and let you know that a fresh install of the most recent
release is working with no errors.

Thanks so much for your assistance!

Amy (spotz)

On Sun, Jul 15, 2018 at 10:49 AM, Hongbin Lu  wrote:

> Hi Amy,
>
> The wrong Keystone URI might be due to an issue with the devstack
> plugins. I have proposed fixes [1] [2] for that. Thanks for the suggestion
> about adding a note for uninstalling pip packages. I have created a ticket
> [3] for that.
>
> [1] https://review.openstack.org/#/c/582799/
> [2] https://review.openstack.org/#/c/582800/
> [3] https://bugs.launchpad.net/zun/+bug/1781807
>
> Best regards,
> Hongbin
>
> On Sun, Jul 15, 2018 at 10:16 AM Amy Marrich  wrote:
>
>> Hongbin,
>>
>> Doing the pip uninstall did the trick with the Flask version. When
>> running another debug I did notice an incorrect IP for the Keystone URI, and
>> have restarted the machine's networking and cleaned up /etc/hosts.
>>
>> When doing a second stack, I did need to uninstall the pip packages again
>> for the second stack.sh to complete; it might be worth adding this to the docs
>> as a note in case people have issues. The second install still had the wrong IP
>> showing as the Keystone URI, so I'll try a fresh machine install next.
>>
>> Thanks for all your help!
>>
>> Amy (spotz)
>>
>> On Sat, Jul 14, 2018 at 9:42 PM, Hongbin Lu  wrote:
>>
>>> Hi Amy,
>>>
>>> Today, I created a fresh VM with Ubuntu 16.04 and ran ./stack.sh with
>>> your local.conf, but I couldn't reproduce the two issues you mentioned (the
>>> Flask version conflict issue and the 401 issue). By analyzing the logs you
>>> provided, it seems some python packages on your machine are pretty old.
>>> First, could you paste me the output of "pip freeze"? Second, if possible,
>>> I would suggest removing all the python packages and re-stacking as
>>> follows:
>>>
>>> * Run ./unstack
>>> * Run ./clean.sh
>>> * Run  pip freeze | grep -v '^\-e' | xargs sudo pip uninstall -y
>>> * Run ./stack
>>>
>>> Please let us know if above steps still don't work.
>>>
>>> Best regards,
>>> Hongbin
>>>
>>> On Sat, Jul 14, 2018 at 5:15 PM Amy Marrich  wrote:
>>>
 Hongbin,

 This was a fresh install from master this week

 commit 6312db47e9141acd33142ae857bdeeb92c59994e

 Merge: ef35713 2742875

 Author: Zuul 

 Date:   Wed Jul 11 20:36:12 2018 +


 Merge "Cleanup keystone's removed config options"

 Except for builds with my patching kuryr-libnetwork locally, builds have
 been done with reclone and fresh /opt/stack directories. A patch has been
 submitted for the Flask issue at
 https://review.openstack.org/582634 but hasn't passed the gates yet.


 Following the instructions above on a new pull of devstack:

 commit 3b5477d6356a62d7d64a519a4b1ac99309d251c0

 Author: OpenStack Proposal Bot 

 Date:   Thu Jul 12 06:17:32 2018 +

 Updated from generate-devstack-plugins-list

 Change-Id: I8f702373c76953a0a29285f410d368c975ba4024


 I'm still able to use the openstack CLI for non-Zun commands, but get a 401 on
 Zun:

 root@zunui:~# openstack service list

 +----------------------------------+------------------+------------------+
 | ID                               | Name             | Type             |
 +----------------------------------+------------------+------------------+
 | 06be414af2fd4d59af8de0ccff78149e | placement        | placement        |
 | 0df1832d6f8c4a5aa7b5e8bacf7339f8 | nova             | compute          |
 | 3f1b2692a184443c85b631fa7acf714d | heat-cfn         | cloudformation   |
 | 3f6bcbb75f684041bf6eeaaf5ab4c14b | cinder           | block-storage    |
 | 6e06ac1394ee4872aa134081d190f18e | neutron          | network          |
 | 76afda8ecd18474ba382dbb4dc22b4bb | kuryr-libnetwork | kuryr-libnetwork |
 | 7b336b8b9b9c4f6bbcc5fa6b9400ccaf | cinderv3         | volumev3         |
 | a0f83f30276d45e2bd5fd14ff8410380 | nova_legacy      | compute_legacy   |
 | a12600a2467141ff89a406ec3b50bacb | cinderv2         | volumev2         |
 | d5bfb92a244b4e7888cae28ca6b2bbac | keystone         | identity         |
 | d9ea196e9cae4b0691f6c4b619eb47c9 | zun              | container        |
 | e528282e291f4ddbaaac6d6c82a0036e | cinder           | volume           |
 | e6078b2c01184f88a784b390f0b28263 | glance           | image            |
 | e650be6c67ac4e5c812f2a4e4cca2544 | heat             | orchestration    |
 +----------------------------------+------------------+------------------+

 root@zunui:~# openstack appcontainer list

 Unauthorized (HTTP 401) (Request-ID: req-e44f5caf-642c-4435-ab1d-98feae1fada9)

 root@zunui:~# zun list

 ERROR: Unauthorized (HTTP 401) (Request-ID: 

Re: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation

2018-07-17 Thread Ian Y. Choi

Hello,

When I saw overall translation source strings on container whitepaper, I 
would infer that new edge computing whitepaper
source strings would include HTML markup tags. On the other hand, the 
source strings of edge computing whitepaper
which I18n team previously translated do not include HTML markup tags, 
since the source strings are based on just text format.


I really appreciate Akihiro's work on RST-based support on publishing 
translated edge computing whitepapers, since
translators do not have to re-translate all the strings. On the other 
hand, it seems that I18n team needs to investigate on
translating similar strings of HTML-based edge computing whitepaper 
source strings, which would discourage translators.


That's my point of view on translating edge computing whitepaper.

For translating container whitepaper, I want to further ask the 
followings since *I18n-based tools*
would mean for translators that translators can test and publish 
translated whitepapers locally:


- How to build translated container whitepaper using original 
Silverstripe-based repository?
  https://docs.openstack.org/i18n/latest/tools.html describes well how 
to build translated artifacts for RST-based OpenStack repositories
  but I could not find the way how to build translated container 
whitepaper with translated resources on Zanata.



With many thanks,

/Ian

Jimmy McArthur wrote on 7/17/2018 11:01 PM:

Frank,

I'm sorry to hear about the displeasure around the Edge paper.  As 
mentioned in a prior thread, the RST format that Akihiro worked did 
not work with the  Zanata process that we have been using with our 
CMS.  Additionally, the existing EDGE page is a PDF, so we had to 
build a new template to work with the new HTML whitepaper layout we 
created for the Containers paper. I outlined this in the thread " 
[OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge Computing 
Whitepaper Translation" on 6/25/18 and mentioned we would be ready 
with the template around 7/13.


We completed the work on the new whitepaper template and then put out 
the pot files on Zanata so we can get the po language files back. If 
this process is too cumbersome for the translation team, I'm open to 
discussion, but right now our entire translation process is based on 
the official OpenStack Docs translation process outlined by the i18n 
team: https://docs.openstack.org/i18n/latest/en_GB/tools.html


Again, I realize Akihiro put in some work on his own proposing the new 
translation type. If the i18n team is moving to this format instead, 
we can work on redoing our process.


Please let me know if I can clarify further.

Thanks,
Jimmy

Frank Kloeker wrote:

Hi Jimmy,

permission was added for you and Sebastian. The Container Whitepaper 
is on the Zanata frontpage now. But we removed Edge Computing 
whitepaper last week because there is a kind of displeasure in the 
team since the results of translation are still not published beside 
Chinese version. It would be nice if we have a commitment from the 
Foundation that results are published in a specific timeframe. This 
includes your requirements until the translation should be available.


thx Frank

Am 2018-07-16 17:26, schrieb Jimmy McArthur:

Sorry, I should have also added... we additionally need permissions so
that we can add a new version of the pot file to this project:
https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 



Thanks!
Jimmy



Jimmy McArthur wrote:

Hi all -

We have both of the current whitepapers up and available for 
translation.  Can we promote these on the Zanata homepage?


https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 
https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 
Thanks all!

Jimmy



__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] How to look up a project name from Neutron server code?

2018-07-17 Thread Jay Pipes

On 07/17/2018 03:36 AM, Neil Jerram wrote:
Can someone help me with how to look up a project name (aka tenant name) 
for a known project/tenant ID, from code (specifically a mechanism 
driver) running in the Neutron server?


I believe that means I need to make a GET REST call as here: 
https://developer.openstack.org/api-ref/identity/v3/index.html#projects.  But 
I don't yet understand how a piece of Neutron server code can ensure 
that it has the right credentials to do that.  If someone happens to 
have actual code for doing this, I'm sure that would be very helpful.


(I'm aware that whenever the Neutron server processes an API request, 
the project name for the project that generated that request is added 
into the request context.  That is great when my code is running in an 
API request context.  But there are other times when the code isn't in a 
request context and still needs to map from a project ID to project 
name; hence the question here.)


Hi Neil,

You basically answered your own question above :) The neutron request 
context gets built from oslo.context's Context.from_environ() [1] which 
has this note in the implementation [2]:


# Load a new context object from the environment variables set by
# auth_token middleware. See:
# 
https://docs.openstack.org/keystonemiddleware/latest/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service


So, basically, simply look at the HTTP headers for HTTP_X_PROJECT_NAME. 
If you don't have access to the HTTP headers, then you'll need to pass 
some context object/struct to the code you're referring to. You might as 
well pass the neutron RequestContext (derived from oslo_context.Context) 
to that code, and you get all this for free.
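
As a minimal sketch of both paths (the helper names below are illustrative, 
not an existing neutron API, and the port dict is assumed to carry an 'id' key):

    from oslo_context import context as oslo_context


    def project_name_from_environ(environ):
        # auth_token middleware puts the caller's identity into the WSGI
        # environment (HTTP_X_PROJECT_NAME among others); from_environ()
        # rebuilds a context object from those headers.
        ctx = oslo_context.RequestContext.from_environ(environ)
        return ctx.project_name


    def log_port_project(request_context, port):
        # When neutron hands you its RequestContext (derived from
        # oslo_context's RequestContext), the project name is already attached.
        name = request_context.project_name or '<unknown project>'
        print('port %s belongs to project %s' % (port['id'], name))

The design point is the same either way: thread the request context through to 
the code that needs the name rather than re-deriving the identity later.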


Best,
-jay

[1] 
https://github.com/openstack/oslo.context/blob/4abd5377e4d847102a4e87a528d689e31cc1713c/oslo_context/context.py#L424


[2] 
https://github.com/openstack/oslo.context/blob/4abd5377e4d847102a4e87a528d689e31cc1713c/oslo_context/context.py#L433-L435


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-17 Thread Thierry Carrez

Finally found the time to properly read this...

Zane Bitter wrote:

[...]
We chose to add features to Nova to compete with vCenter/oVirt, and not 
to add features that would have enabled OpenStack as a whole to compete 
with more than just the compute provisioning subset of EC2/Azure/GCP.


Could you give an example of an EC2 action that would be beyond the 
"compute provisioning subset" that you think we should have built into 
Nova ?


Meanwhile, the other projects in OpenStack were working on building the 
other parts of an AWS/Azure/GCP competitor. And our vague one-sentence 
mission statement allowed us all to maintain the delusion that we were 
all working on the same thing and pulling in the same direction, when in 
truth we haven't been at all.


Do you think that organizing (tying) our APIs along [micro]services, 
rather than building a sanely-organized user API on top of a 
sanely-organized set of microservices, played a role in that divide ?


We can decide that we want to be one, or the other, or both. But if we 
don't all decide together then a lot of us are going to continue wasting 
our time working at cross-purposes.


If you are saying that we should choose between being vCenter or AWS, I 
would definitely say the latter. But I'm still not sure I see this issue 
in such a binary manner.


Imagine if (as suggested above) we refactored the compute node and gave 
it a user API, would that be one, the other, both ? Or just a sane 
addition to improve what OpenStack really is today: a set of open 
infrastructure components providing different services with each their 
API, with slight gaps and overlaps between them ?


Personally, I'm not very interested in discussing what OpenStack could 
have been if we started building it today. I'm much more interested in 
discussing what to add or change in order to make it usable for more use 
cases while continuing to serve the needs of our existing users. And I'm 
not convinced that's an either/or choice...


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation

2018-07-17 Thread Jimmy McArthur

Frank,

I'm sorry to hear about the displeasure around the Edge paper.  As 
mentioned in a prior thread, the RST format that Akihiro worked did not 
work with the  Zanata process that we have been using with our CMS.  
Additionally, the existing EDGE page is a PDF, so we had to build a new 
template to work with the new HTML whitepaper layout we created for the 
Containers paper. I outlined this in the thread " [OpenStack-I18n] 
[Edge-computing] [Openstack-sigs] Edge Computing Whitepaper Translation" 
on 6/25/18 and mentioned we would be ready with the template around 7/13.


We completed the work on the new whitepaper template and then put out 
the pot files on Zanata so we can get the po language files back. If 
this process is too cumbersome for the translation team, I'm open to 
discussion, but right now our entire translation process is based on the 
official OpenStack Docs translation process outlined by the i18n team: 
https://docs.openstack.org/i18n/latest/en_GB/tools.html


Again, I realize Akihiro put in some work on his own proposing the new 
translation type. If the i18n team is moving to this format instead, we 
can work on redoing our process.


Please let me know if I can clarify further.

Thanks,
Jimmy

Frank Kloeker wrote:

Hi Jimmy,

permission was added for you and Sebastian. The Container Whitepaper 
is on the Zanata frontpage now. But we removed Edge Computing 
whitepaper last week because there is a kind of displeasure in the 
team since the results of translation are still not published beside 
Chinese version. It would be nice if we have a commitment from the 
Foundation that results are published in a specific timeframe. This 
includes your requirements until the translation should be available.


thx Frank

Am 2018-07-16 17:26, schrieb Jimmy McArthur:

Sorry, I should have also added... we additionally need permissions so
that we can add a new version of the pot file to this project:
https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 



Thanks!
Jimmy



Jimmy McArthur wrote:

Hi all -

We have both of the current whitepapers up and available for 
translation.  Can we promote these on the Zanata homepage?


https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 
https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 
Thanks all!

Jimmy



__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Summit Berlin CFP Deadline Today

2018-07-17 Thread Ashlee Ferguson
Hi everyone,

The CFP for the OpenStack Summit Berlin closes July 17 at 11:59pm PST (July 18 
at 6:59am UTC), so make sure to press submit on your
talks for:

• CI/CD
• Container Infrastructure
• Edge Computing
• Hands-on Workshops
• HPC / GPU / AI
• Open Source Community
• Private & Hybrid Cloud
• Public Cloud
• Telecom & NFV


SUBMIT HERE 



Register for the Summit - Early Bird pricing ends August 21

Become a Sponsor 

If you have any questions, please email sum...@openstack.org.

Cheers,
Ashlee


Ashlee Ferguson
OpenStack Foundation
ash...@openstack.org




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Nominate Feilong Wang for Core Reviewer

2018-07-17 Thread Lingxian Kong
Huge +1


Cheers,
Lingxian Kong

On Tue, Jul 17, 2018 at 7:04 PM, Yatin Karel  wrote:

> +2 Well deserved.
>
> Welcome Feilong and Thanks for all the Great Work!!!
>
>
> Regards
> Yatin Karel
>
> On Tue, Jul 17, 2018 at 12:27 PM, Spyros Trigazis 
> wrote:
> > Hello list,
> >
> > I'm excited to nominate Feilong as Core Reviewer for the Magnum project.
> >
> > Feilong has contributed many features like Calico as an alternative CNI
> for
> > kubernetes, make coredns scale proportionally to the cluster, improved
> > admin operations on clusters and improved multi-master deployments. Apart
> > from contributing to the project he has been contributing to other
> projects
> > like gophercloud and shade, he has been very helpful with code reviews
> > and he tests and reviews all patches that are coming in. Finally, he is
> very
> > responsive on IRC and in the ML.
> >
> > Thanks for all your contributions Feilong, I'm looking forward to working
> > with
> > you more!
> >
> > Cheers,
> > Spyros
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-17 Thread Rambo
yes
 
 
-- Original --
From:  "Ivan Kolodyazhny";
Date:  Tue, Jul 17, 2018 05:00 PM
To:  "OpenStack Developmen"; 

Subject:  Re: [openstack-dev] [cinder] about block device driver

 
Do you use the volumes on the same nodes where instances are located?

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/



 
On Tue, Jul 17, 2018 at 11:52 AM, Rambo  wrote:
Yes, my Cinder driver is LVM+LIO. I have uploaded the test result in the appendix. Can 
you show me your test results? Thank you!



 
 
-- Original --
From:  "Ivan Kolodyazhny";
Date:  Tue, Jul 17, 2018 04:09 PM
To:  "OpenStack Developmen"; 

Subject:  Re: [openstack-dev] [cinder] about block device driver



 
Rambo,

Did you try to use LVM+LIO target driver? It shows pretty good performance 
comparing to BlockDeviceDriver,


Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/



 
On Tue, Jul 17, 2018 at 10:24 AM, Rambo  wrote:
Oh, the instances using Cinder perform intense I/O, thus iSCSI or LVM is not a 
viable option - we benchmarked them several times, with unsatisfactory 
results. Sometimes its IOPS is twice as bad. Could you show me your test 
data? Thank you!





Cheers,
Rambo
 
 
-- Original --
From: "Sean McGinnis"; 
Date: Monday, July 16, 2018, 9:32 PM
To: "OpenStack Developmen"; 
Subject: Re: [openstack-dev] [cinder] about block device driver

 
On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote:
> On 16/07, Rambo wrote:
> > Well,in my opinion,the BlockDeviceDriver is more suitable than any other 
> > solution for data processing scenarios.Does the community will agree to 
> > merge the BlockDeviceDriver to the Cinder repository again if our company 
> > hold the maintainer and CI?
> >
> 
> Hi,
> 
> I'm sure the community will be happy to merge the driver back into the
> repository.
> 

The other reason for its removal was its inability to meet the minimum feature
set required for Cinder drivers along with benchmarks showing the LVM and iSCSI
driver could be tweaked to have similar or better performance.

The other option would be to not use Cinder volumes so you just use local
storage on your compute nodes.

Readding the block device driver is not likely an option.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  




__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 





__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation

2018-07-17 Thread Frank Kloeker

Hi Jimmy,

permission was added for you and Sebastian. The Container Whitepaper is 
on the Zanata frontpage now. But we removed Edge Computing whitepaper 
last week because there is a kind of displeasure in the team since the 
results of translation are still not published beside Chinese version. 
It would be nice if we have a commitment from the Foundation that 
results are published in a specific timeframe. This includes your 
requirements until the translation should be available.


thx Frank

Am 2018-07-16 17:26, schrieb Jimmy McArthur:

Sorry, I should have also added... we additionally need permissions so
that we can add a new version of the pot file to this project:
https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835

Thanks!
Jimmy



Jimmy McArthur wrote:

Hi all -

We have both of the current whitepapers up and available for 
translation.  Can we promote these on the Zanata homepage?


https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 
https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 
Thanks all!

Jimmy



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-17 Thread Ivan Kolodyazhny
Do you use the volumes on the same nodes where instances are located?

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Tue, Jul 17, 2018 at 11:52 AM, Rambo  wrote:

> yes,My cinder driver is  LVM+LIO.I have upload the test result in
> appendix.Can you show me your test results?Thank you!
>
>
>
> -- Original --
> *From: * "Ivan Kolodyazhny";
> *Date: * Tue, Jul 17, 2018 04:09 PM
> *To: * "OpenStack Developmen";
> *Subject: * Re: [openstack-dev] [cinder] about block device driver
>
> Rambo,
>
> Did you try to use LVM+LIO target driver? It shows pretty good performance
> comparing to BlockDeviceDriver,
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
> On Tue, Jul 17, 2018 at 10:24 AM, Rambo  wrote:
>
>> Oh,the instances using Cinder perform intense I/O, thus iSCSI or LVM is
>> not a viable option - benchmarked them several times, unsatisfactory
>> results.Sometimes it's IOPS is twice as bad,could you show me your test
>> data?Thank you!
>>
>>
>>
>> Cheers,
>> Rambo
>>
>>
>> -- Original --
>> *From:* "Sean McGinnis";
>> *Date:* Monday, July 16, 2018, 9:32 PM
>> *To:* "OpenStack Developmen";
>> *Subject:* Re: [openstack-dev] [cinder] about block device driver
>>
>> On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote:
>> > On 16/07, Rambo wrote:
>> > > Well,in my opinion,the BlockDeviceDriver is more suitable than any
>> other solution for data processing scenarios.Does the community will agree
>> to merge the BlockDeviceDriver to the Cinder repository again if our
>> company hold the maintainer and CI?
>> > >
>> >
>> > Hi,
>> >
>> > I'm sure the community will be happy to merge the driver back into the
>> > repository.
>> >
>>
>> The other reason for its removal was its inability to meet the minimum
>> feature
>> set required for Cinder drivers along with benchmarks showing the LVM and
>> iSCSI
>> driver could be tweaked to have similar or better performance.
>>
>> The other option would be to not use Cinder volumes so you just use local
>> storage on your compute nodes.
>>
>> Readding the block device driver is not likely an option.
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-17 Thread Rambo
Yes, my Cinder driver is LVM+LIO. I have uploaded the test result in the appendix. Can 
you show me your test results? Thank you!



 
 
-- Original --
From:  "Ivan Kolodyazhny";
Date:  Tue, Jul 17, 2018 04:09 PM
To:  "OpenStack Developmen"; 

Subject:  Re: [openstack-dev] [cinder] about block device driver

 
Rambo,

Did you try to use LVM+LIO target driver? It shows pretty good performance 
comparing to BlockDeviceDriver,


Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/



 
On Tue, Jul 17, 2018 at 10:24 AM, Rambo  wrote:
Oh, the instances using Cinder perform intense I/O, thus iSCSI or LVM is not a 
viable option - we benchmarked them several times, with unsatisfactory 
results. Sometimes its IOPS is twice as bad. Could you show me your test 
data? Thank you!





Cheers,
Rambo
 
 
-- Original --
From: "Sean McGinnis"; 
Date: Monday, July 16, 2018, 9:32 PM
To: "OpenStack Developmen"; 
Subject: Re: [openstack-dev] [cinder] about block device driver

 
On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote:
> On 16/07, Rambo wrote:
> > Well,in my opinion,the BlockDeviceDriver is more suitable than any other 
> > solution for data processing scenarios.Does the community will agree to 
> > merge the BlockDeviceDriver to the Cinder repository again if our company 
> > hold the maintainer and CI?
> >
> 
> Hi,
> 
> I'm sure the community will be happy to merge the driver back into the
> repository.
> 

The other reason for its removal was its inability to meet the minimum feature
set required for Cinder drivers along with benchmarks showing the LVM and iSCSI
driver could be tweaked to have similar or better performance.

The other option would be to not use Cinder volumes so you just use local
storage on your compute nodes.

Readding the block device driver is not likely an option.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  




__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-17 Thread Ivan Kolodyazhny
Rambo,

Did you try to use LVM+LIO target driver? It shows pretty good performance
comparing to BlockDeviceDriver,

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Tue, Jul 17, 2018 at 10:24 AM, Rambo  wrote:

> Oh,the instances using Cinder perform intense I/O, thus iSCSI or LVM is
> not a viable option - benchmarked them several times, unsatisfactory
> results.Sometimes it's IOPS is twice as bad,could you show me your test
> data?Thank you!
>
>
>
> Cheers,
> Rambo
>
>
> -- Original --
> *From:* "Sean McGinnis";
> *Date:* Monday, July 16, 2018, 9:32 PM
> *To:* "OpenStack Developmen";
> *Subject:* Re: [openstack-dev] [cinder] about block device driver
>
> On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote:
> > On 16/07, Rambo wrote:
> > > Well,in my opinion,the BlockDeviceDriver is more suitable than any
> other solution for data processing scenarios.Does the community will agree
> to merge the BlockDeviceDriver to the Cinder repository again if our
> company hold the maintainer and CI?
> > >
> >
> > Hi,
> >
> > I'm sure the community will be happy to merge the driver back into the
> > repository.
> >
>
> The other reason for its removal was its inability to meet the minimum
> feature
> set required for Cinder drivers along with benchmarks showing the LVM and
> iSCSI
> driver could be tweaked to have similar or better performance.
>
> The other option would be to not use Cinder volumes so you just use local
> storage on your compute nodes.
>
> Readding the block device driver is not likely an option.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] New "validation" subcommand for "openstack undercloud"

2018-07-17 Thread Udi Kalifon
I opened this RFE: https://bugzilla.redhat.com/show_bug.cgi?id=1601739


Regards,
Udi Kalifon; Senior QE; RHOS-UI Automation


On Tue, Jul 17, 2018 at 8:42 AM, Cédric Jeanneret 
wrote:

>
>
> On 07/17/2018 06:57 AM, Udi Kalifon wrote:
> > We should also add support for the openstack client to launch the other
> > validators that are used in the GUI. There are validators for the
> > overcloud as well, and new validators are added all the time.
> >
> > These validators are installed under
> > /usr/share/openstack-tripleo-validations/validations/ and they're
> > launched by the command:
> > ansible-playbook -i /usr/bin/tripleo-ansible-inventory \
> >     /usr/share/openstack-tripleo-validations/validations/<<validator-name.py>>
>
> Hey, funky - I'm currently adding the support for ansible-playbook (in
> an "easy, fast and pre-step" way) to the tripleoclient in order to be
> able to run validations from that very same location:
> https://review.openstack.org/582917
>
> Guess we're on the same track :).
>
> >
> > Cedric, feel free to open an RFE.
>
> Will do once we have the full scope :).
>
> >
> >
> >
> >
> > Regards,
> > Udi Kalifon; Senior QE; RHOS-UIAutomation
> >
> >
> > On Mon, Jul 16, 2018 at 6:32 PM, Dan Prince  > > wrote:
> >
> > On Mon, Jul 16, 2018 at 11:27 AM Cédric Jeanneret
> > mailto:cjean...@redhat.com>> wrote:
> > >
> > > Dear Stackers,
> > >
> > > In order to let operators properly validate their undercloud node,
> I
> > > propose to create a new subcommand in the "openstack undercloud"
> "tree":
> > > `openstack undercloud validate'
> > >
> > > This should only run the different validations we have in the
> > > undercloud_preflight.py¹
> > > That way, an operator will be able to ensure all is valid before
> > > starting "for real" any other command like "install" or "upgrade".
> > >
> > > Of course, this "validate" step is embedded in the "install" and
> > > "upgrade" already, but having the capability to just validate
> without
> > > any further action is something that can be interesting, for
> example:
> > >
> > > - ensure the current undercloud hardware/vm is sufficient for an
> update
> > > - ensure the allocated VM for the undercloud is sufficient for a
> deploy
> > > - and so on
> > >
> > > There are probably other possibilities, if we extend the
> "validation"
> > > scope outside the "undercloud" (like, tripleo, allinone, even
> overcloud).
> > >
> > > What do you think? Any pros/cons/thoughts?
> >
> > I think this command could be very useful. I'm assuming the
> underlying
> > implementation would call a 'heat stack-validate' using an ephemeral
> > heat-all instance. If so way we implement it for the undercloud vs
> the
> > 'standalone' use case would likely be a bit different. We can
> probably
> > subclass the implementations to share common code across the efforts
> > though.
> >
> > For the undercloud you are likely to have a few extra 'local only'
> > validations. Perhaps extra checks for things on the client side.
> >
> > For the all-in-one I had envisioned using the output from the 'heat
> > stack-validate' to create a sample config file for a custom set of
> > services. Similar to how tools like Packstack generate a config file
> > for example.
> >
> > Dan
> >
> > >
> > > Cheers,
> > >
> > > C.
> > >
> > >
> > >
> > > ¹
> > > http://git.openstack.org/cgit/openstack/python-tripleoclient/tree/
> tripleoclient/v1/undercloud_preflight.py
> >  tripleoclient/v1/undercloud_preflight.py>
> > > --
> > > Cédric Jeanneret
> > > Software Engineer
> > > DFG:DF
> > >
> > >
> > 
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >  unsubscribe>
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >  unsubscribe>
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> >
> >
>
> --
> Cédric Jeanneret
> Software Engineer
> DFG:DF
>
>
> __
> OpenStack 

[openstack-dev] [neutron] How to look up a project name from Neutron server code?

2018-07-17 Thread Neil Jerram
Can someone help me with how to look up a project name (aka tenant name)
for a known project/tenant ID, from code (specifically a mechanism driver)
running in the Neutron server?

I believe that means I need to make a GET REST call as here:
https://developer.openstack.org/api-ref/identity/v3/index.html#projects.
But I don't yet understand how a piece of Neutron server code can ensure
that it has the right credentials to do that.  If someone happens to have
actual code for doing this, I'm sure that would be very helpful.

(I'm aware that whenever the Neutron server processes an API request, the
project name for the project that generated that request is added into the
request context.  That is great when my code is running in an API request
context.  But there are other times when the code isn't in a request
context and still needs to map from a project ID to project name; hence the
question here.)
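
As a hedged sketch of the Keystone GET call described above for the
out-of-request-context case, assuming the service credentials neutron already
loads under [keystone_authtoken] are acceptable for reading projects (the helper
name and the choice of that option group are assumptions, not an established
neutron pattern):

    from keystoneauth1 import loading as ks_loading
    from keystoneclient.v3 import client as ks_client
    from oslo_config import cfg

    CONF = cfg.CONF


    def project_name_from_id(project_id):
        # Reuse the credentials the auth_token middleware registers under
        # [keystone_authtoken] instead of inventing a new service user.
        auth = ks_loading.load_auth_from_conf_options(
            CONF, 'keystone_authtoken')
        session = ks_loading.load_session_from_conf_options(
            CONF, 'keystone_authtoken', auth=auth)
        keystone = ks_client.Client(session=session)
        # GET /v3/projects/{project_id}; the service user needs a role
        # that is allowed to read projects.
        return keystone.projects.get(project_id).name

Caching the ID-to-name result would avoid a Keystone round trip on every call.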

Many thanks,
 Neil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-17 Thread Rambo
Oh, the instances using Cinder perform intense I/O, thus iSCSI or LVM is not a 
viable option - we benchmarked them several times, with unsatisfactory 
results. Sometimes its IOPS is twice as bad. Could you show me your test 
data? Thank you!





Cheers,
Rambo
 
 
-- Original --
From: "Sean McGinnis"; 
Date: Monday, July 16, 2018, 9:32 PM
To: "OpenStack Developmen"; 
Subject: Re: [openstack-dev] [cinder] about block device driver

 
On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote:
> On 16/07, Rambo wrote:
> > Well,in my opinion,the BlockDeviceDriver is more suitable than any other 
> > solution for data processing scenarios.Does the community will agree to 
> > merge the BlockDeviceDriver to the Cinder repository again if our company 
> > hold the maintainer and CI?
> >
> 
> Hi,
> 
> I'm sure the community will be happy to merge the driver back into the
> repository.
> 

The other reason for its removal was its inability to meet the minimum feature
set required for Cinder drivers along with benchmarks showing the LVM and iSCSI
driver could be tweaked to have similar or better performance.

The other option would be to not use Cinder volumes so you just use local
storage on your compute nodes.

Readding the block device driver is not likely an option.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Nominate Feilong Wang for Core Reviewer

2018-07-17 Thread Yatin Karel
+2 Well deserved.

Welcome Feilong and Thanks for all the Great Work!!!


Regards
Yatin Karel

On Tue, Jul 17, 2018 at 12:27 PM, Spyros Trigazis  wrote:
> Hello list,
>
> I'm excited to nominate Feilong as Core Reviewer for the Magnum project.
>
> Feilong has contributed many features like Calico as an alternative CNI for
> kubernetes, make coredns scale proportionally to the cluster, improved
> admin operations on clusters and improved multi-master deployments. Apart
> from contributing to the project he has been contributing to other projects
> like gophercloud and shade, he has been very helpful with code reviews
> and he tests and reviews all patches that are coming in. Finally, he is very
> responsive on IRC and in the ML.
>
> Thanks for all your contributions Feilong, I'm looking forward to working
> with
> you more!
>
> Cheers,
> Spyros
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Nominate Feilong Wang for Core Reviewer

2018-07-17 Thread Spyros Trigazis
Hello list,

I'm excited to nominate Feilong as Core Reviewer for the Magnum project.

Feilong has contributed many features, like Calico as an alternative CNI for
kubernetes, making coredns scale proportionally to the cluster, improved
admin operations on clusters, and improved multi-master deployments. Apart
from contributing to the project, he has been contributing to other projects
like gophercloud and shade; he has been very helpful with code reviews,
and he tests and reviews all patches that are coming in. Finally, he is very
responsive on IRC and in the ML.

Thanks for all your contributions, Feilong. I'm looking forward to working
with you more!

Cheers,
Spyros
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev