Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-06 Thread Zane Bitter

I'm not Kevin but I think I can clarify some of these.

On 03/07/18 16:04, Jay Pipes wrote:
On 07/03/2018 02:37 PM, Fox, Kevin M wrote: 
So these days containers are out-clouding VMs for this use case. So, does Nova continue to be the cloudy VM, or does it go for the more production VM use case, like oVirt and VMware?


"production VM" use case like oVirt or VMWare? I don't know what that means. You mean 
"a GUI-based VM management system"?


Read 'pets'.

While some people only ever consider running Kubernetes on top of a 
cloud, some of us realize that maintaining both a cloud and a Kubernetes is 
unnecessary, and that we can greatly simplify things by running k8s on 
bare metal. This does then make it a competitor to Nova as a platform 
for running workloads on.


What percentage of Kubernetes users deploy on baremetal (and continue to 
deploy on baremetal in production as opposed to just toying around with 
it)?


At Red Hat Summit there was a demo of deploying OpenShift alongside (not 
on top of) OpenStack on bare metal using Director (downstream of TripleO 
- so managed by Ironic in an OpenStack undercloud).


I don't know whether running Kubernetes directly on bare metal in 
production is widespread right now, but it's clear to me that it's 
just around the corner.


As k8s gains more multitenancy features, this trend will continue to 
grow I think. OpenStack needs to be ready for when that becomes a thing.


OpenStack is already multi-tenant, having been designed as such from day 
one, with the exception of Ironic, which uses Nova to enable multi-tenancy.


What specifically are you referring to with "OpenStack needs to be 
ready"? Also, what specific parts of OpenStack are you referring to there?


I believe the point was:

* OpenStack supports multitenancy.
* Kubernetes does not support multitenancy.
* Applications that require multitenancy currently require separate 
per-tenant deployments of Kubernetes; deploying on top of a cloud (such 
as OpenStack) makes this easier, so there is demand for OpenStack from 
people who need multitenancy even if they are mainly interacting with 
Kubernetes. Effectively OpenStack is the multitenancy layer for k8s in a 
lot of deployments.

* One day Kubernetes will support multitenancy.
* Then what?


Think of OpenStack like a game console. The moment you make a component 
optional and make it take extra effort to obtain, few software developers 
target it, and rarely does anyone buy the addons because there isn't 
software for them. Right now, just about everything in OpenStack is an addon. 
That's a problem.


I don't have any game consoles nor do I develop software for them,


Me neither, but much like OpenStack it's a two-sided marketplace 
(developers and users in the console case, operators and end-users in 
the OpenStack case), where you succeed or fail based on how much value 
you can get flowing between the two sides. There's a positive feedback 
loop between supply on one side and demand on the other, so like all 
positive feedback loops it's unstable and you have to find some way to 
bootstrap it in the right direction, which is hard. One way to make it 
much, much harder is to segment your market in such a way that you give 
yourself a second feedback loop that you also have to bootstrap, that 
depends on the first one, and you only get to use a subset of your 
existing market participants to do it.


As an example from my other reply, we're probably going to try to use 
Barbican to help integrate Heat with external tools like k8s and 
Ansible, but for that to have any impact we'll have to convince users 
that they want to do this badly enough that they'll convince their 
operators to deploy Barbican - and we'll likely have to do so before 
they've even tried it. That's even after we've already convinced them to 
use OpenStack and deploy Heat. If Barbican (and Heat) were available as 
part of every OpenStack deployment, then it'd just be a matter of 
convincing people to use the feature, which would already be available 
and which they could try out at any time. That's a much lower bar.


I'm not defending "make it a monolith" as a solution, but Kevin is 
identifying a real problem.


- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] Can we support key wrapping mechanisms other than CKM_AES_CBC_PAD?

2018-07-06 Thread Lingxian Kong
Hi Barbican guys,

Currently, I am testing the integration between Barbican and SoftHSM v2, but
I ran into a problem: SoftHSM v2 doesn't support the CKM_AES_CBC_PAD key
wrapping mechanism, which is hardcoded in the Barbican code here:
https://github.com/openstack/barbican/blob/5dea5cec130b59ecfb8d46435cd7eb3212894b4c/barbican/plugin/crypto/pkcs11.py#L496.
After discussion with SoftHSM team, I was told SoftHSM does support other
mechanisms such as CKM_AES_KEY_WRAP, CKM_AES_KEY_WRAP_PAD, CKM_RSA_PKCS, or
CKM_RSA_PKCS_OAEP.

My question is: is it easy to support other wrapping mechanisms in
Barbican? Or is there another workaround for this problem?
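
To illustrate what I mean by supporting other mechanisms, something roughly
like the sketch below is what I have in mind. The mechanism values come from
the PKCS#11 v2.40 spec, but the mapping and the helper function here are only
hypothetical, not actual Barbican code or configuration:

    # hypothetical sketch: map a configured name to its PKCS#11 constant
    WRAPPING_MECHANISMS = {
        'CKM_AES_CBC_PAD': 0x00001085,      # the currently hardcoded mechanism
        'CKM_AES_KEY_WRAP': 0x00002109,
        'CKM_AES_KEY_WRAP_PAD': 0x0000210A,
        'CKM_RSA_PKCS': 0x00000001,
        'CKM_RSA_PKCS_OAEP': 0x00000009,
    }

    def select_wrapping_mechanism(name='CKM_AES_CBC_PAD'):
        # fall back to the current behaviour unless another name is configured
        try:
            return WRAPPING_MECHANISMS[name]
        except KeyError:
            raise ValueError('Unsupported key wrapping mechanism: %s' % name)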

Cheers,
Lingxian Kong
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [First Contact] [SIG] [PTL] Project Liaisons

2018-07-06 Thread Kendall Nelson
Hello again!

I updated the Project Liaisons list [1] with the PTLs I didn't hear back from
about delegating the duties to a different person.

If you want to delegate this or add other people willing to be contacted by
new contributors, please let me know and I would be happy to update the
list :)

It would also be nice to fill in your timezone, so that new contributors
looking over the list know when might be the best time to contact you.

Thanks!

-Kendall Nelson (diablo_rojo)

[1] https://wiki.openstack.org/wiki/First_Contact_SIG#Project_Liaisons

On Wed, Jun 6, 2018 at 3:00 PM Kendall Nelson  wrote:

> Hello!
>
> As you hopefully are aware, the First Contact SIG strives to provide a
> place for new contributors to come for information and advice. Part of this
> is helping new contributors find more established contributors in the
> community whom they can ask for help. While the group of people involved in
> the FC SIG is diverse in project knowledge, we don't have all of the
> projects covered.
>
> Over the last year we have built a list of Project Liaisons to refer new
> contributors to when the project they are interested in isn't one we know
> well. Unfortunately, this list[1] isn't as filled out as we would like it
> to be.
>
> So! Keeping with the conventions of other liaison roles, if there isn't
> already a project liaison named, this role will default to the PTL unless
> you respond to this thread with the individual you are delegating to :)  Or
> by adding them to the list in the wiki[1].
>
> Essentially the duties of the liaison are just to be willing to help out
> newcomers when a FC SIG member introduces you to them and to keep an eye
> out for patches that come in to your project with the 'Welcome, new
> contributor' bot message. It's likely you are doing this already, but to
> have a defined list of people to refer to would be a huge help.
>
> Thank you!
>
> -Kendall Nelson (diablo_rojo)
>
> [1]https://wiki.openstack.org/wiki/First_Contact_SIG#Project_Liaisons
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]API update week 28-4

2018-07-06 Thread Matt Riedemann

On 7/4/2018 6:10 AM, Ghanshyam Mann wrote:

Planned Features :
==
Below are the API related features for the Rocky cycle. The Nova API subteam will start reviewing those to give their regular feedback. If anything's missing there, feel free to add it to the etherpad - https://etherpad.openstack.org/p/rocky-nova-priorities-tracking


Oh yeah, getting agreement on the direction of the "handling a down 
cell" spec is going to be important and we don't have much time to get 
this done now either (~3 weeks).


https://review.openstack.org/#/c/557369/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]API update week 28-4

2018-07-06 Thread Matt Riedemann
Thanks for sending out this update, it's useful for me when I'm not 
attending the office hours.


On 7/4/2018 6:10 AM, Ghanshyam Mann wrote:

1. Servers Ips non-unique network names :
  
-https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names
  - Spec Update need another +2 -https://review.openstack.org/#/c/558125/
  -https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged)   
  - Weekly Progress: On Hold. Waiting for spec update to merge first.


I've poked dansmith to approve this. However, the author (or someone) 
should start working on the code changes since it's a pretty 
straight-forward change and we don't have much time left in the cycle 
for this.




2. Abort live migration in queued state:
 
-https://blueprints.launchpad.net/nova/+spec/abort-live-migration-in-queued-status
 -https://review.openstack.org/#/q/topic:bp/abort-live-migration-in-queued-status+(status:open+OR+status:merged)
 - Weekly Progress: Code is up for review. No Review last week.




This is in the runways queue so it should be coming up in the next slot.


3. Complex anti-affinity policies:
 -https://blueprints.launchpad.net/nova/+spec/complex-anti-affinity-policies
 -https://review.openstack.org/#/q/topic:bp/complex-anti-affinity-policies+(status:open+OR+status:merged)   
 - Weekly Progress: Code is up for review. Few reviews done .


This is currently in a runways slot and has had quite a bit of review 
this week. It's not going to be done this week but hopefully we can get 
it all merged next week (there aren't any major issues that I foresee at 
this point after having gone through the full series yesterday).




4. Volume multiattach enhancements:
 
-https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements
 -https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged)   
 - Weekly Progress: Waiting to hear from mriedem about his WIP on base patch -https://review.openstack.org/#/c/569649/3


Yeah this is on my TODO list. I was waiting for the spec to merge before 
starting in on the API changes, and have just been busy with other 
stuff. I'll try to get the API changes done next week.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [nova][api] Novaclient redirect endpoint https into http

2018-07-06 Thread Matt Riedemann

On 7/6/2018 6:28 AM, Kristi Nikolla wrote:

If the answer is 'no', can we find a process that gets us there? Or
are we doomed
by the inability to version the version document?


We could always microversion the version document, couldn't we? Not 
saying we want to, but it's an option, right?


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Supporting volume_type when booting from volume

2018-07-06 Thread Matt Riedemann

On 5/23/2017 7:12 PM, Michael Glasgow wrote:


A slight disadvantage of this approach is that the resulting 
incongruence between the client and the API is obfuscating.  When an end 
user can make accurate inferences about the API based on how the client 
works, that's a form of transparency that can pay dividends.


Also in terms of the "slippery slope" that has been raised, putting 
small bits of orchestration into the client creates a grey area there as 
well:  how much is too much?


OTOH I don't disagree with you.  This approach might be the best of 
several not-so-great options, but I wish I could think of a better one.


Just an FYI that this same 'pass volume type when booting from volume' 
request came up again today:


https://review.openstack.org/#/c/579520/

We might want to talk about this again at the Stein PTG in September to 
see if the benefit of adding this for people outweighs the orchestration 
cost since it seems it's never going to go away and lots of deployments 
are already patching it in.
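
For anyone who hasn't seen it, the orchestration being pushed into the client
is roughly the following two-step workaround (names here are just examples):

    # pre-create the boot volume with the desired type, then boot from it
    openstack volume create --size 20 --type <volume-type> --image <image> boot-vol
    openstack server create --flavor <flavor> --volume boot-vol my-server

The ask is to collapse that into a single boot-from-volume request that
accepts a volume type.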


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-06 Thread Jay Pipes

On 07/06/2018 12:58 PM, Zane Bitter wrote:

On 02/07/18 19:13, Jay Pipes wrote:

Nova's primary competition is:

* Stand-alone Ironic
* oVirt and stand-alone virsh callers
* Parts of VMWare vCenter [3]
* MaaS in some respects


Do you see KubeVirt or Kata or Virtlet or RancherVM ending up on this 
list at any point? Because a lot of people* do.

>

* https://news.ycombinator.com/item?id=17013779


Please don't lose credibility by saying "a lot of people" see things 
like RancherVM as competitors to Nova [1] by pointing to a HackerNews 
[2] thread where two people discuss why RancherVM exists and where one 
of those people is Darren Shepherd, a co-founder of Rancher, previously 
at Citrix and GoDaddy with a long-known distaste for all things OpenStack.


I don't think that thread is particularly unbiased or helpful.

I'll respond to the rest of your (excellent) points a little later...

Best,
-jay

[1] Nova isn't actually mentioned there. "OpenStack" is.

[2] I've often wondered who has time to actually respond to anything on 
HackerNews. Same for when Slashdot was a thing. In fact, now that I 
think about it, I spend entirely too much time worrying about all of 
this stuff... ;)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-07-06 Thread Ben Nemec

(adding the list back)

On 07/06/2018 12:05 PM, Dan Prince wrote:

On Fri, Jul 6, 2018 at 12:03 PM Ben Nemec  wrote:




On 07/05/2018 01:23 PM, Dan Prince wrote:

On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:


I would almost rather see us organize the directories by service
name/project instead of implementation.

Instead of:

puppet/services/nova-api.yaml
puppet/services/nova-conductor.yaml
docker/services/nova-api.yaml
docker/services/nova-conductor.yaml

We'd have:

services/nova/nova-api-puppet.yaml
services/nova/nova-conductor-puppet.yaml
services/nova/nova-api-docker.yaml
services/nova/nova-conductor-docker.yaml

(or perhaps even another level of directories to indicate
puppet/docker/ansible?)


I'd be open to this but doing changes on this scale is a much larger
developer and user impact than what I was thinking we would be willing
to entertain for the issue that caused me to bring this up (i.e. how to
identify services which get configured by Ansible).

It's also worth noting that many projects keep these sorts of things in
different repos too. Like Kolla fully separates kolla-ansible and
kolla-kubernetes as they are quite divergent. We have been able to
preserve some of our common service architectures but as things move
towards kubernetes we may wish to change things structurally a bit
too.


True, but the current directory layout was from back when we intended to
support multiple deployment tools in parallel (originally
tripleo-image-elements and puppet).  Since I think it has become clear
that it's impractical to maintain two different technologies to do
essentially the same thing I'm not sure there's a need for it now.  It's
also worth noting that kolla-kubernetes basically died because there
weren't enough people to maintain both deployment methods, so we're not 
the only ones who have found that to be true.  If/when we move to
kubernetes I would anticipate it going like the initial containers work
did - development for a couple of cycles, then a switch to the new thing
and deprecation of the old thing, then removal of support for the old thing.


Sometimes the old things are a bit longer lived though. And sometimes
the new thing doesn't work out the way you thought it would. Having an
abstraction layer where you can have more than just new/old things is
sometimes very useful. I'd hate to see us ditch it. Especially since
you can already sort of have both right now by using the resource
registry files to set up a nice default for everything and gradually
switch to new stuff as your defaults.


I don't know that you lose that ability in either case though.  You can 
still point your resource registry at the -puppet versions of the 
services if you want to do that.  The only thing that changes is the 
location of the files.


Given that, I don't think there's actually a _huge_ difference between 
the two options.  I prefer the flat directory just because as I've been 
working on designate it's mildly annoying to have to navigate two 
separate directory trees to find all the designate-related service 
files, but I realize that's a fairly minor complaint. :-)






That being said, because the service yamls are essentially an API for 
TripleO (they're referenced in user resource registries), I'm not sure 
it's worth the churn to move 
everything either.  I think that's going to be an issue either way
though, it's just a question of the scope.  _Something_ is going to move
around no matter how we reorganize so it's a problem that needs to be
addressed anyway.


I feel like renaming every service template in t-h-t as part of
solving my initial concerns around identifying the 'ansible configured
services' is a bit of a sledgehammer though. I like some of the
renaming ideas proposed here too. I'm just not convinced that renaming
*some* templates is the same as restructuring the entire t-h-t
services hierarchy. I'd rather wait and let it happen more naturally I
guess, perhaps when we need to do something more destructive already.


My thought was that either way we're causing people grief because they 
have to update their files, but the big bang approach would mean they do 
it once and then it's done.  Except I realize now that's not true, 
because as more things move to ansible the filenames would continue to 
change.


Which makes me wonder if we should be encoding implementation details 
into the filenames in the first place.  Ideally, the interface would be 
"I want designate-api, so I set OS::TripleO::Services::DesignateApi: 
services/designate-api.yaml".  As a user I probably don't care what 
technology is used to deploy it, I just want it deployed.  Then if/when 
we change our default method, it just gets swapped out seamlessly and 
there's no need for me to change my configuration.
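
To make that concrete, the difference is just what a user's environment file
points at (paths here are illustrative, not exact t-h-t contents):

    resource_registry:
      # today: the implementation is encoded in the path
      OS::TripleO::Services::NovaApi: docker/services/nova-api.yaml
      # what I'm describing: an implementation-neutral default
      OS::TripleO::Services::DesignateApi: services/designate-api.yaml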


Obviously we'd still need the ability to have method-specific templates 
too, but maybe the default designate-api.yaml could be a symlink to 
whatever we consider the primary one.  Not 

[openstack-dev] [cinder] Planning Etherpad for Denver 2018 PTG

2018-07-06 Thread Jay S Bryant

All,

I have created an etherpad to start planning for the Denver PTG in 
September. [1]  Please start adding topics to the etherpad.


Look forward to seeing you all there!

Jay

(jungleboyj)

[1] https://etherpad.openstack.org/p/cinder-ptg-planning-denver-9-2018



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Deprecation notice: Cinder Driver for NetApp E-Series

2018-07-06 Thread Gavioli, Luiz
Developers and Operators,

NetApp’s various Cinder drivers currently provide platform integration for
ONTAP powered systems, SolidFire, and E/EF-Series systems. Per
systems-provided telemetry and discussion amongst our user community, we’ve
learned that when E/EF-Series systems are deployed with OpenStack they do not
commonly make use of the platform-specific Cinder driver (instead opting for
use of the LVM driver or Ceph layered atop). Given that, we’re proposing to
cease further development and maintenance of the E-Series drivers within
OpenStack and will focus development on our widely used SolidFire and ONTAP
options.

In accordance with community policy [1], we are initiating the deprecation
process for the NetApp E-Series drivers [2], set to conclude with their
removal in the OpenStack Stein release. This will apply to both protocols
currently supported in this driver: iSCSI and FC.

What is being deprecated: Cinder drivers for NetApp E-Series

Period of deprecation: E-Series drivers will be around in stable/rocky and
will be removed in the Stein release (all milestones of this release).

What should users/operators do: Any Cinder E-Series deployers are encouraged
to get in touch with NetApp via the community #openstack-netapp IRC channel
on freenode or via the #OpenStack Slack channel on http://netapp.io. We
encourage migration to the LVM driver for continued use of E-Series systems
in most cases via Cinder’s migrate facility [3].

[1] 
https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html
[2] 
https://review.openstack.org/#/c/580679/
[3] https://docs.openstack.org/admin-guide/blockstorage-volume-migration.html

Thanks,
Luiz Gavioli
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-06 Thread Zane Bitter

On 02/07/18 19:13, Jay Pipes wrote:
Also note that when I've said that *OpenStack* should have a smaller 
mission and scope, that doesn't mean that higher-level services 
aren't necessary or wanted.


Thank you for saying this, and could I please ask you to repeat this 
disclaimer whenever you talk about a smaller scope for OpenStack.


Yes. I shall shout it from the highest mountains. [1]


Thanks. Appreciate it :)

[1] I live in Florida, though, which has no mountains. But, when I 
visit, say, North Carolina, I shall certainly shout it from their 
mountains.


That's where I live, so I'll keep an eye out for you if I hear shouting.

Because for those of us working on higher-level services it feels like 
there has been a non-stop chorus (both inside and outside the project) 
of people wanting to redefine OpenStack as something that doesn't 
include us.


I've said in the past (on Twitter, can't find the link right now, but 
it's out there somewhere) something to the effect of "at some point, 
someone just needs to come out and say that OpenStack is, at its core, 
Nova, Neutron, Keystone, Glance and Cinder".


https://twitter.com/jaypipes/status/875377520224460800 for anyone who 
was curious.


Interestingly, that and my equally off-the-cuff reply 
https://twitter.com/zerobanana/status/875559517731381249 are actually 
pretty close to the minimal descriptions of the two broad camps we were 
talking about in the technical vision etherpad. (Noting for the record 
that cdent disputes that views can be distilled into two camps.)


Perhaps this is what you were recollecting. I would use a different 
phrase nowadays to describe what I was thinking with the above.


I don't think I was recalling anything in particular that *you* had 
said. Complaining about the non-core projects (presumably on the logic 
that if we kicked them out of OpenStack all their developers would go to 
work on radically simplifying the remaining projects instead?) was a 
widespread popular pastime for roughly the four years from 2013-2016.


I would say instead "Nova, Neutron, Cinder, Keystone and Glance [2] are 
a definitive lower level of an OpenStack deployment. They represent a 
set of required integrated services that supply the most basic 
infrastructure for datacenter resource management when deploying 
OpenStack."


Note the difference in wording. Instead of saying "OpenStack is X", I'm 
saying "These particular services represent a specific layer of an 
OpenStack deployment".


OK great. So this is wrong :) and I will attempt to explain why I think 
that in a second. But first I want to acknowledge what is attractive 
about this viewpoint (even to me). This is a genuinely useful 
observation that leads to a real insight.


The insight, I think, is the same one we all just agreed on in another 
part of the thread: OpenStack is the only open source project 
concentrating on the gap between a rack full of unconfigured equipment 
and somewhere that you could, say, install Kubernetes. We write the bit 
where the rubber meets the road, and if we don't get it done there's 
nobody else to do it! There's an almost infinite variety of different 
applications and they'll all need different parts of the higher layers, 
but ultimately they'll all need to be reified in a physical data center 
and when they do, we'll be there: that's the core of what we're building.


It's honestly only the tiniest of leaps from seeing that idea as 
attractive, useful, and genuinely insightful to seeing it as correct, 
and I don't really blame anybody who made that leap.


I'm going to gloss over the fact that we punted the actual process of 
setting up the data center to a bunch of what turned out to be 
vendor-specific installer projects that you suggest should be punted out 
of OpenStack altogether, because that isn't the biggest problem I have 
with this view.


Back in the '70s there was this idea about AI: even a 2 year old human 
can e.g. recognise images with a high degree of accuracy, but doing e.g. 
calculus is extremely hard in comparison and takes years of training. 
But computers can already do calculus! Ergo, we've solved the hardest 
part already and building the rest out of that will be trivial, AGI is 
just around the corner, &c. &c. (I believe I cribbed this explanation 
from an outdated memory of Marvin Minsky's 1982 paper "Why People Think 
Computers Can't" - specifically the section "Could a Computer Have 
Common Sense?" - so that's a better source if you actually want to learn 
something about AI.) The popularity of this idea arguably helped created 
the AI bubble, and the inevitable collision with the reality of its 
fundamental wrongness led to the AI Winter. Because in fact just because 
you can build logic out of many layers of heuristics (as human brains 
do), it absolutely does not follow that it's trivial to build other 
things that also require many layers of heuristics once you have some 
basic logic building blocks

[openstack-dev] [tripleo] What is the proper way to use NetConfigDataLookup?

2018-07-06 Thread Mark Hamzy
What is the proper way to use NetConfigDataLookup?  I tried the following:

(undercloud) [stack@oscloud5 ~]$ cat << '__EOF__' > 
~/templates/mapping-info.yaml
parameter_defaults:
  NetConfigDataLookup:
  control1:
nic1: '5c:f3:fc:36:dd:68'
nic2: '5c:f3:fc:36:dd:6c'
nic3: '6c:ae:8b:29:27:fa' # 9.114.219.34
nic4: '6c:ae:8b:29:27:fb' # 9.114.118.???
nic5: '6c:ae:8b:29:27:fc'
nic6: '6c:ae:8b:29:27:fd'
  compute1:
nic1: '6c:ae:8b:25:34:ea' # 9.114.219.44
nic2: '6c:ae:8b:25:34:eb'
nic3: '6c:ae:8b:25:34:ec' # 9.114.118.???
nic4: '6c:ae:8b:25:34:ed'
  compute2:
nic1: '00:0a:f7:73:3c:c0'
nic2: '00:0a:f7:73:3c:c1'
nic3: '00:0a:f7:73:3c:c2' # 9.114.118.156
nic4: '00:0a:f7:73:3c:c3' # 9.114.112.???
nic5: '00:0a:f7:73:73:f4'
nic6: '00:0a:f7:73:73:f5'
nic7: '00:0a:f7:73:73:f6' # 9.114.219.134
nic8: '00:0a:f7:73:73:f7'
__EOF__
(undercloud) [stack@oscloud5 ~]$ openstack overcloud deploy --templates -e 
~/templates/node-info.yaml -e ~/templates/mapping-info.yaml -e 
~/templates/overcloud_images.yaml -e 
~/templates/environments/network-environment.yaml -e 
~/templates/environments/network-isolation.yaml -e 
~/templates/environments/config-debug.yaml --disable-validations 
--ntp-server pool.ntp.org --control-scale 1 --compute-scale

But I did not see a /etc/os-net-config/mapping.yaml get created.

Also, is this configuration used when the system boots IronicPythonAgent to 
provision the disk?

-- 
Mark

You must be the change you wish to see in the world. -- Mahatma Gandhi
Never let the future disturb you. You will meet it, if you have to, with 
the same weapons of reason which today arm you against the present. -- 
Marcus Aurelius

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-07-06 Thread Ben Nemec



On 07/05/2018 01:23 PM, Dan Prince wrote:

On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:


I would almost rather see us organize the directories by service
name/project instead of implementation.

Instead of:

puppet/services/nova-api.yaml
puppet/services/nova-conductor.yaml
docker/services/nova-api.yaml
docker/services/nova-conductor.yaml

We'd have:

services/nova/nova-api-puppet.yaml
services/nova/nova-conductor-puppet.yaml
services/nova/nova-api-docker.yaml
services/nova/nova-conductor-docker.yaml

(or perhaps even another level of directories to indicate
puppet/docker/ansible?)


I'd be open to this but doing changes on this scale is a much larger
developer and user impact than what I was thinking we would be willing
to entertain for the issue that caused me to bring this up (i.e. how to
identify services which get configured by Ansible).

It's also worth noting that many projects keep these sorts of things in
different repos too. Like Kolla fully separates kolla-ansible and
kolla-kubernetes as they are quite divergent. We have been able to
preserve some of our common service architectures but as things move
towards kubernetes we may wish to change things structurally a bit
too.


True, but the current directory layout was from back when we intended to 
support multiple deployment tools in parallel (originally 
tripleo-image-elements and puppet).  Since I think it has become clear 
that it's impractical to maintain two different technologies to do 
essentially the same thing I'm not sure there's a need for it now.  It's 
also worth noting that kolla-kubernetes basically died because there 
weren't enough people to maintain both deployment methods, so we're not 
the only ones who have found that to be true.  If/when we move to 
kubernetes I would anticipate it going like the initial containers work 
did - development for a couple of cycles, then a switch to the new thing 
and deprecation of the old thing, then removal of support for the old thing.


That being said, because the service yamls are essentially an API for 
TripleO (they're referenced in user resource registries), I'm not sure 
it's worth the churn to move 
everything either.  I think that's going to be an issue either way 
though, it's just a question of the scope.  _Something_ is going to move 
around no matter how we reorganize so it's a problem that needs to be 
addressed anyway.


-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][security][api-wg] Adding http security headers

2018-07-06 Thread Luke Hinds
On Thu, Jul 5, 2018 at 6:17 PM, Doug Hellmann  wrote:

> Excerpts from Jim Rollenhagen's message of 2018-07-05 12:53:34 -0400:
> > On Thu, Jul 5, 2018 at 12:40 PM, Nishant Kumar E <
> > nishant.e.ku...@ericsson.com> wrote:
> >
> > > Hi,
> > >
> > >
> > >
> > > I have registered a blueprint for adding http security headers -
> > > https://blueprints.launchpad.net/cinder/+spec/http-security-headers
> > >
> > >
> > >
> > > Reason for introducing this change - I work for the AT&T cloud project –
> > > Network Cloud (earlier known as AT&T Integrated Cloud). As part of working
> > > there we have introduced this change within all the services as kind of a
> > > downstream change, but we would like to see it become part of the upstream
> > > community. While we did not face any major threats without this change,
> > > during our investigation process we found that when dealing with web
> > > services we should maximize security as much as possible, and we came up
> > > with a list of HTTP security headers that we should include as part of the
> > > OpenStack services. I would like to introduce this change as part of
> > > Cinder to start off and then propagate it to all the services.
> > >
> > > Some reference links which might give more insight into this:
> > >
> > > - https://www.owasp.org/index.php/OWASP_Secure_Headers_Project#tab=Headers
> > > - https://www.keycdn.com/blog/http-security-headers/
> > > - https://securityintelligence.com/an-introduction-to-http-response-headers-for-security/
> > >
> > > Please let me know if this looks good and whether it can be included as
> > > part of Cinder followed by other services. More details on how the
> > > implementation will be done are mentioned as part of the blueprint, but
> > > any better ideas for implementation are welcomed too!
> > > better ideas for implementation is welcomed too !!
> > >
> >
> > Wouldn't this be a job for the HTTP server in front of cinder (or
> whatever
> > service)? Especially "Strict-Transport-Security" as one shouldn't be
> > enabling that without ensuring a correct TLS config.
> >
> > Bonus points in that upstream wouldn't need any changes, and we won't
> need
> > to change every project. :)
> >
> > // jim
>
> Yes, this feels very much like something the deployment tools should
> do when they set up Apache or uWSGI or whatever service is in front
> of each API WSGI service.
>
> Doug
>
>
I agree, this should all be set within an installer, rather than in the base
project itself. Horizon (or rather Django) has directives to enable many of
the common security header fields, but rather than set these directly in
Horizon's local_settings, we patched the openstack puppet-horizon module.
Take the following for example, around disabling X-Frame embedding:

https://github.com/openstack/puppet-horizon/blob/218c35ea7bc08dd88d936ab79b14e5ce2b94ea44/releasenotes/notes/disallow_iframe_embed-f0ffa1cabeca5b1e.yaml#L2

The same approach should be used elsewhere, with whatever the preferred
deployment tool is (puppet, chef, ansible etc.). That way, if a decision is
made to roll out TLS, you can also toggle in certificate pinning etc. in
the same tool flow.
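
For a concrete sense of what the front end would carry, a deployment tool
could template something like the following Apache (mod_headers) fragment in
front of an API service. The values are examples only, and HSTS in particular
should only be set once TLS is correctly configured:

    Header always set X-Frame-Options "DENY"
    Header always set X-Content-Type-Options "nosniff"
    Header always set X-XSS-Protection "1; mode=block"
    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"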



> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-06 Thread Ben Nemec
Red Hat OpenStack is based on RDO.  It's not pretty far from it, it's 
very close.  It's basically productized RDO, and in the interest of 
everyone's sanity we try to keep the downstream patches to a minimum.


In general I would be careful trying to take the distro analogy too far 
though.  The release cycles of the Red Hat Linux distros are very 
different from that of the OpenStack distros.  RDO would be more akin to 
CentOS in terms of how closely related they are, but the relationship is 
inverted.  CentOS is taking the RHEL source (which is based on whatever 
the current Fedora release is when a new major RHEL version gets 
branched) and distributing packages based on it, while RHOS is taking 
the RDO bits and productizing them.  There's no point in having a 
CentOS-like distro that then repackages the RHOS source because you'd 
end up with essentially RDO again.  RDO and RHOS don't diverge the way 
Fedora and RHEL do after they are branched because they're on the same 
release cycle.


So essentially the flow with the Linux distros looks like:

Upstream->Fedora->RHEL->CentOS

Whereas the OpenStack distros are:

Upstream->RDO->RHOS

With RDO serving the purpose of both Fedora and CentOS.

As for TripleO, it's been integrated with RHOS/RDO since Kilo, and I 
believe it has been the recommended way to deploy in production since 
then as well.


-Ben

On 07/05/2018 03:17 PM, Fox, Kevin M wrote:
I use RDO in production. It's pretty far from Red Hat OpenStack, though 
it's been a while since I tried the TripleO part of RDO. Is it pretty 
well integrated now? Similar to Red Hat OpenStack? Or is it more Fedora-like 
than CentOS-like?


Thanks,
Kevin

*From:* Dmitry Tantsur [dtant...@redhat.com]
*Sent:* Thursday, July 05, 2018 11:17 AM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [tc] [all] TC Report 18-26




On Thu, Jul 5, 2018, 19:31 Fox, Kevin M > wrote:


We're pretty far into a tangent...

/me shrugs. I've done it. It can work.

Some things you're right about. Deploying k8s is more work than deploying
ansible. But what I said depends on context. If your goal is to
deploy k8s/manage k8s then having to learn how to use k8s is not a
big ask. adding a different tool such as ansible is an extra
cognitive dependency. Deploying k8s doesn't need a general solution
to deploying generic base OS's. Just enough OS to deploy K8s and
then deploy everything on top in containers. Deploying a seed k8s
with minikube is pretty trivial. I'm not suggesting a solution here
to provide generic provisioning to every use case in the datacenter.
But enough to get a k8s based cluster up and self hosted enough
where you could launch other provisioning/management tools in that
same cluster, if you need that. It provides a solid base for the
datacenter on which you can easily add the services you need for
dealing with everything.

All of the microservices I mentioned can be wrapped up in a single
helm chart and deployed with a single helm install command.

I don't have permission to release anything at the moment, so I
can't prove anything right now. So, take my advice with a grain of
salt. :)

Switching gears, you said why would users use lfs when they can use
a distro, so why use openstack without a distro. I'd say, today
unless you are paying a lot, there isn't really an equivalent distro
that isn't almost as much effort as lfs when you consider day2 ops.
To compare with Redhat again, we have a RHEL (redhat openstack), and
Rawhide (devstack) but no equivalent of CentOS. Though I think
TripleO has been making progress on this front...


It's RDO what you're looking for (equivalent of centos). TripleO is an 
installer project, not a distribution.



Anyway. This thread is I think 2 tangents away from the original
topic now. If folks are interested in continuing this discussion,
lets open a new thread.

Thanks,
Kevin


From: Dmitry Tantsur [dtant...@redhat.com ]
Sent: Wednesday, July 04, 2018 4:24 AM
To: openstack-dev@lists.openstack.org

Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

Tried hard to avoid this thread, but this message is so much wrong..

On 07/03/2018 09:48 PM, Fox, Kevin M wrote:
 > I don't dispute trivial, but a self hosting k8s on bare metal is
not incredibly hard. In fact, it is easier then you might think. k8s
is a platform for deploying/managing services. Guess what you need
to provision bare metal? Just a few microservices. A dhcp service.
dhcpd in a daemonset works well. some pxe infrastructure. pixiecore
with a simple http backend works pretty well in practice. a serv

Re: [openstack-dev] [openstack-community] Give cinder-backup more CPU resources

2018-07-06 Thread Jay S Bryant



On 7/6/2018 9:33 AM, Alan Bishop wrote:


On Fri, Jul 6, 2018 at 10:18 AM Amy Marrich > wrote:


Hey,

Forwarding to the Dev list as you may get a better response from
there.

Thanks,

Amy (spotz)

On Thu, Jul 5, 2018 at 11:30 PM, Keynes Lee/WHQ/Wistron
mailto:keynes_...@wistron.com>> wrote:

Hi

When making “cinder backup-create”

We found the process “cinder-backup” use 100% util of 1 CPU
core on an OpenStack Controller node.

It not just causes a bad backup performance, also make the
openstack-cinder-backup unstable.

Especially when we make several backup at the same time.

The Controller Node has 40 CPU cores.

Can we assign more CPU resources to cinder-backup ?


This has been addressed in [1], but it may not be in the release 
you're using.


[1] 
https://github.com/openstack/cinder/commit/373b52404151d80e83004a37d543f825846edea1




In addition to the change above we also have 
https://review.openstack.org/#/c/537003/ which should also help with the 
stability issues.  That has been backported as far as Pike.


The change for multiple processes is only in master for the Rocky 
release right now.
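
Once you are on a release with the multiple-process change, the number of
backup processes is set in cinder.conf. If I'm remembering the option name
correctly it is backup_workers, e.g. (please double-check the Rocky release
notes before relying on this):

    [DEFAULT]
    # number of cinder-backup processes to spawn, default 1
    backup_workers = 8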


Jay


Alan



___
Community mailing list
commun...@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/community


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-community] Give cinder-backup more CPU resources

2018-07-06 Thread Duncan Thomas
You can run many c-bak processes on one node, which will get fed round
robin, so you should see fairly linear speedup in the many backups case
until you run out of CPUs.

Parallelising a single backup was something I attempted, but python makes
it extremely difficult so there's no useful implementation I'm aware of.

On Fri, 6 Jul 2018, 3:18 pm Amy Marrich,  wrote:

> Hey,
>
> Forwarding to the Dev list as you may get a better response from there.
>
> Thanks,
>
> Amy (spotz)
>
> On Thu, Jul 5, 2018 at 11:30 PM, Keynes Lee/WHQ/Wistron <
> keynes_...@wistron.com> wrote:
>
>> Hi
>>
>>
>>
>> When making “cinder backup-create”
>>
>> We found the process “cinder-backup” use 100% util of 1 CPU core on an
>> OpenStack Controller node.
>>
>> It not just causes a bad backup performance, also make the
>> openstack-cinder-backup unstable.
>>
>> Especially when we make several backup at the same time.
>>
>>
>>
>> The Controller Node has 40 CPU cores.
>>
>> Can we assign more CPU resources to cinder-backup ?
>>
>>
>> ___
>> Community mailing list
>> commun...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/community
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-community] Give cinder-backup more CPU resources

2018-07-06 Thread Alan Bishop
On Fri, Jul 6, 2018 at 10:18 AM Amy Marrich  wrote:

> Hey,
>
> Forwarding to the Dev list as you may get a better response from there.
>
> Thanks,
>
> Amy (spotz)
>
> On Thu, Jul 5, 2018 at 11:30 PM, Keynes Lee/WHQ/Wistron <
> keynes_...@wistron.com> wrote:
>
>> Hi
>>
>>
>>
>> When making “cinder backup-create”
>>
>> We found the process “cinder-backup” use 100% util of 1 CPU core on an
>> OpenStack Controller node.
>>
>> It not just causes a bad backup performance, also make the
>> openstack-cinder-backup unstable.
>>
>> Especially when we make several backup at the same time.
>>
>>
>>
>> The Controller Node has 40 CPU cores.
>>
>> Can we assign more CPU resources to cinder-backup ?
>>
>
This has been addressed in [1], but it may not be in the release you're
using.

[1]
https://github.com/openstack/cinder/commit/373b52404151d80e83004a37d543f825846edea1

Alan


>>
>> ___
>> Community mailing list
>> commun...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/community
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-community] Give cinder-backup more CPU resources

2018-07-06 Thread Amy Marrich
Hey,

Forwarding to the Dev list as you may get a better response from there.

Thanks,

Amy (spotz)

On Thu, Jul 5, 2018 at 11:30 PM, Keynes Lee/WHQ/Wistron <
keynes_...@wistron.com> wrote:

> Hi
>
>
>
> When making “cinder backup-create”
>
> We found the process “cinder-backup” uses 100% of 1 CPU core on an
> OpenStack Controller node.
>
> It not only causes bad backup performance, it also makes
> openstack-cinder-backup unstable,
>
> especially when we make several backups at the same time.
>
>
>
> The Controller Node has 40 CPU cores.
>
> Can we assign more CPU resources to cinder-backup ?
>
>
>
> ___
> Community mailing list
> commun...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/community
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [placement] placement update 18-27

2018-07-06 Thread Chris Dent


HTML: https://anticdent.org/placement-update-18-27.html

This is placement update 18-27, a weekly update of ongoing
development related to the [OpenStack](https://www.openstack.org/)
[placement
service](https://developer.openstack.org/api-ref/placement/). This
is a contract version.

# Most Important

In the past week or so we've found two suites of bugs that are
holding up other work. One set is related to consumers and the
handling of consumer generations (linked in that theme, below). The
other is related to various ways in which managing parents of nested
providers is not correct. Those are:

* 
* 

The first is already fixed, the second was discovered as a result of
thinking about the first.

We also have some open questions about which of consumer id, project
id, and user id are definitely going to be a valid UUID and what
that means in relation to enforcement and our definition of "what's
a good uuid":

* 

As usual, this is more support for the fact that we need to be
doing increased manual testing to find where our automated tests
have gaps, and fill them.

On themes, we have several things rushing for attention before the
end of the cycle (reminder: Feature Freeze is the end of this
month). We need to get the various facets of consumers fixed up in a
way that we all agree is correct. We need to get the Reshaped
Providers implemented. And there's some hope (maybe vain?) that we
can get the report client and virt drivers talking in a more nested
and shared form.

# What's Changed

The microversion for nested allocation candidates has merged as
1.29.

The huge pile of osc-placement changes at

has merged. Yay!

# Bugs

* Placement related [bugs not yet in
  progress](https://goo.gl/TgiPXb): 16, -3 on last week.
* [In progress placement bugs](https://goo.gl/vzGGDQ) 17, +7 on last
  week.

# Questions

* Will consumer id, project and user id always be a UUID? We've
  established for certain that user id will not, but things are
  less clear for the other two. This issue is compounded by the
  fact that these two strings are different but the same UUID:
  5eb033fd-c550-420e-a31c-3ec2703a403c,
  5eb033fdc550420ea31c3ec2703a403c (bug 1758057 mentioned above) but
  we treat them differently in our code.
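
As a small illustration of the normalization question: Python's stdlib
already treats the dashed and un-dashed forms as the same value, so
normalizing on input is cheap; whether we want to is the open
question (a sketch, not placement code):

    import uuid

    # both spellings parse to the same 128-bit value
    assert (uuid.UUID('5eb033fd-c550-420e-a31c-3ec2703a403c') ==
            uuid.UUID('5eb033fdc550420ea31c3ec2703a403c'))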

# Main Themes

## Documentation

This is a section for reminding us to document all the fun stuff we
are enabling. Open areas include:

* Documenting optional placement database. A bug,
  [1778227](https://bugs.launchpad.net/nova/+bug/1778227) has been
  created to track this. This has started, for the install docs, but
  there are more places that need to be touched.

* "How to deploy / model shared disk. Seems fairly straight-forward,
  and we could even maybe create a multi-node ceph job that does
  this - wouldn't that be awesome?!?!", says an enthusiastic Matt
  Riedemann.

* The whens and wheres of re-shaping and VGPUs.

## Nested providers in allocation candidates

The main code of this has merged. What's left are dealing with
things like the parenting bugs mentioned above, and actually
reporting any nested providers and inventory so we can make use of
them.

## Consumer Generations

A fair number of open bug fixes and debate on this stuff.

* 
  No ability to update consumer's project and/or user external ID
* 
  Consumers never get deleted 
* 

  Add UUID validation for consumer_uuid
  (This one has some of the discussion about whether consumers are
  always going to be UUIDs)
* 
  move lookup of provider from _new_allocations()
* 
  return 404 when no consumer found in allocs

Note that once this is correct we'll still have work to do in the
report client to handle consumer generations before nova can do
anything with it.

## Reshape Provider Trees

This allows moving inventory and allocations that were on resource
provider A to resource provider B in an atomic fashion. The
blueprint topic is:

* 

There are WIPs for the HTTP parts and the resource tracker parts, on
that topic.

## Mirror Host Aggregates

This needs a command line tool:

* 

## Extraction

Extraction is mostly taking a back seat at the moment while we
find and fix bugs in existing features. We've also done quite a lot
of the preparatory work. The main things to be thinking about are:

* os-resource-classes
* infra and co-gating issues that are going to come up
* copying whatever nova-based test fixture we might like

# Other

24 entries last week. 20 now (this is a contract week, there's
plenty of new re

Re: [openstack-dev] [octavia] Some tips about amphora driver

2018-07-06 Thread Michael Johnson
Hi Jeff,
Thank you for your comments. I will reply on the story.

Michael

On Thu, Jul 5, 2018 at 8:02 PM Jeff Yang  wrote:
>
> Recently, my team plans to provide load balancing services with Octavia. I
> recorded some of the needs and suggestions of our team members. The following
> suggestions about amphora may be very useful.
>
> [1] Users can specify an image and flavor for the amphora.
> [2] Enable multiple processes (version < 1.8) or multiple threads
> (version >= 1.8) for haproxy.
> [3] Provide a script to check and clean up bad loadbalancers and amphorae.
> Moreover, we also need to clean up the Neutron and Nova resources for these
> loadbalancers and amphorae.
>
> The implementation of [1] and [2] depends on the provider flavor framework,
> so it's time to implement the provider flavor framework.
> About [3], we can't delete a loadbalancer via the API if the loadbalancer's
> status is PENDING_UPDATE or PENDING_CREATE. And we have no API to delete an
> amphora, so if an amphora's status is not active it will exist forever. So
> the script is necessary.
>
> https://storyboard.openstack.org/#!/story/2002896
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-7, July 9-13

2018-07-06 Thread Sean McGinnis
Welcome to our weekly release countdown.

Development Focus
-

Teams should be focused on implementing planned work for the cycle and bug
fixes.

General Information
---

The deadline for extra ATCs is July 13. If there is someone who contributes
to your project in a way that is not reflected by the usual metrics,
Extra-ATCs can be added by submitting an update to the
reference/projects.yaml file in the openstack/governance repo.
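
For reference, an Extra-ATC entry in that file looks roughly like this (the
person and date are made up; copy the exact schema from an existing entry):

    extra-atcs:
      - name: Jane Doe
        email: jane@example.org
        expires-in: February 2019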

As we get closer to the end of the cycle, we have deadlines coming up for
client and non-client libraries to ensure any dependency issues are worked out
and we have time to make any critical fixes before the final release
candidates. To this end, it is good practice to release libraries throughout
the cycle once they have accumulated any significant functional changes.

The following libraries appear to have some merged changes that have not been
released that could potentially impact consumers of the library. It would be
good to consider getting these released ahead of the deadline to make sure the
changes have some run time:

openstack/osc-placement
openstack/oslo.messaging
openstack/ovsdbapp
openstack/python-brick-cinderclient-ext
openstack/python-magnumclient
openstack/python-novaclient
openstack/python-qinlingclient
openstack/python-swiftclient
openstack/python-tripleoclient
openstack/sushy

Stein Release Schedule
----------------------

As some of you may be aware, there is discussion underway about changing the
Summit and PTG events starting in 2019. With that in mind, we have proposed a
draft schedule for the Stein release to show what it might look
like once we adjust to the expected changes. Please take a look at the possible
schedule and let us know if you see any major conflicts due to holidays or
other events that we have not accounted for:


http://logs.openstack.org/94/575794/1/check/build-openstack-sphinx-docs/b522825/html/stein/schedule.html

Upcoming Deadlines & Dates
--------------------------

Final non-client library release deadline: July 19
Final client library release deadline: July 26
Rocky-3 Milestone: July 26

-- 
Sean McGinnis (smcginnis)



Re: [openstack-dev] [Openstack] [nova][api] Novaclient redirect endpoint https into http

2018-07-06 Thread Kristi Nikolla
On Thu, Jul 5, 2018 at 4:11 PM Monty Taylor  wrote:
>
> On 07/05/2018 01:55 PM, melanie witt wrote:
> > +openstack-dev@
> >
> > On Wed, 4 Jul 2018 14:50:26 +, Bogdan Katynski wrote:
> >>> But I cannot use the nova command; the nova endpoint has been redirected
> >>> from https to http. Here: http://prntscr.com/k2e8s6 (command: nova
> >>> --insecure service list)
> >> First of all, it seems that the nova client is hitting the /v2.1 URI instead
> >> of /v2.1/, and this seems to be triggering the redirect.
> >>
> >> Since openstack CLI works, I presume it must be using the correct URL
> >> and hence it’s not getting redirected.
> >>
> >>> And this is the error log: Unable to establish connection
> >>> to http://192.168.30.70:8774/v2.1/: ('Connection aborted.',
> >>> BadStatusLine("''",))
> >> Looks to me that nova-api does a redirect to an absolute URL. I
> >> suspect SSL is terminated on the HAProxy and nova-api itself is
> >> configured without SSL so it redirects to an http URL.
> >>
> >> In my opinion, nova would be more load-balancer friendly if it used a
> >> relative URI in the redirect but that’s outside of the scope of this
> >> question and since I don’t know the context behind choosing the
> >> absolute URL, I could be wrong on that.
> >
> > Thanks for mentioning this. We do have a bug open in python-novaclient
> > around a similar issue [1]. I've added comments based on this thread and
> > will consult with the API subteam to see if there's something we can do
> > about this in nova-api.
>
> A similar thing came up the other day related to keystone and version
> discovery. Version discovery documents tend to return full urls - even
> though relative urls would make public/internal API endpoints work
> better. (also, sometimes people don't configure things properly and the
> version discovery url winds up being incorrect)
>
> In shade/sdk - we actually construct a wholly-new discovery url based on
> the url used for the catalog and the url in the discovery document since
> we've learned that the version discovery urls are frequently broken.
>
> This is problematic because SOMETIMES people have public urls deployed
> as a sub-url and internal urls deployed on a port - so you have:
>
> Catalog:
> public: https://example.com/compute
> internal: https://compute.example.com:1234
>
> Version discovery:
> https://example.com/compute/v2.1
>
> When we go to combine the catalog url and the versioned url, if the user
> is hitting internal, we produce
> https://compute.example.com:1234/compute/v2.1 - because we have no way
> of systemically knowing that /compute should also be stripped.
>
> VERY LONG WINDED WAY of saying 2 things:
>
> a) Relative URLs would be *way* friendlier (and incidentally are
> supported by keystoneauth, openstacksdk and shade - and are written up
> as being a thing people *should* support in the documents about API
> consumption)
>
> b) Can we get agreement that changing behavior to return or redirect to
> a relative URL would not be considered an api contract break? (it's
> possible the answer to this is 'no' - so it's a real question)

If the answer is 'no', can we find a process that gets us there? Or
are we doomed
by the inability to version the version document?

>
> Monty
>
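
The asymmetry described above can be shown with nothing but the Python standard
library. The endpoints below are the hypothetical public/internal ones from the
catalog example; this is only a sketch of why relative references compose better,
not of what keystoneauth or openstacksdk actually do internally:

```python
from urllib.parse import urljoin

# Hypothetical endpoints, mirroring the catalog example quoted above.
public_base = 'https://example.com/compute/'
internal_base = 'https://compute.example.com:1234/'

# A relative reference (from discovery or a redirect) composes with whichever
# base URL the client actually used:
urljoin(public_base, 'v2.1/')    # -> https://example.com/compute/v2.1/
urljoin(internal_base, 'v2.1/')  # -> https://compute.example.com:1234/v2.1/

# An absolute URL discards the base the client chose, so a client talking to
# the internal endpoint gets sent to the public one:
urljoin(internal_base, 'https://example.com/compute/v2.1/')
# -> https://example.com/compute/v2.1/
```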


Re: [openstack-dev] [nova] about filter the flavor

2018-07-06 Thread Kristi Nikolla
OSC here refers to OpenStackClient [0].

[0]. https://docs.openstack.org/python-openstackclient/latest/

On Fri, Jul 6, 2018 at 4:44 AM Rambo  wrote:
>
> Does the "OSC" mean the osc placement?
>
>
> -- Original --
> From:  "Matt Riedemann";
> Date:  Mon, Jul 2, 2018 10:36 PM
> To:  "OpenStack Developmen";
> Subject:  Re: [openstack-dev] [nova] about filter the flavor
>
> On 7/2/2018 2:43 AM, 李杰 wrote:
> > Oh, sorry, that's not what I meant. In my opinion, we could filter the flavors in the
> > flavor list, e.g. with the CLI: openstack flavor list --property key:value.
>
> There is no support for natively filtering flavors by extra specs in the
> compute REST API so that would have to be added with a microversion (if
> we wanted to add that support). So it would require a nova spec, which
> would be reviewed for consideration at the earliest in the Stein
> release. OSC could do client-side filtering if it wanted.
>
> --
>
> Thanks,
>
> Matt
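
A minimal sketch of the client-side filtering Matt mentions above, assuming
python-novaclient and an already-constructed keystoneauth session (not shown
here); the helper name is illustrative only, and each flavor costs one extra API
call because the compute API has no server-side extra-spec filter:

```python
from novaclient import client as nova_client

def flavors_with_extra_spec(session, key, value):
    """Return flavors whose extra specs contain key=value, filtered client-side."""
    nova = nova_client.Client('2.1', session=session)
    matches = []
    for flavor in nova.flavors.list():
        # One extra GET per flavor; the compute API cannot filter on extra specs.
        if flavor.get_keys().get(key) == value:
            matches.append(flavor)
    return matches
```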


Re: [openstack-dev] [nova] about filter the flavor

2018-07-06 Thread Rambo
Does the "OSC" mean the osc placement?
 
 
-- Original --
From:  "Matt Riedemann";
Date:  Mon, Jul 2, 2018 10:36 PM
To:  "OpenStack Developmen"; 

Subject:  Re: [openstack-dev] [nova] about filter the flavor

 
On 7/2/2018 2:43 AM, 李杰 wrote:
> Oh, sorry, that's not what I meant. In my opinion, we could filter the flavors in the
> flavor list, e.g. with the CLI: openstack flavor list --property key:value.

There is no support for natively filtering flavors by extra specs in the 
compute REST API so that would have to be added with a microversion (if 
we wanted to add that support). So it would require a nova spec, which 
would be reviewed for consideration at the earliest in the Stein 
release. OSC could do client-side filtering if it wanted.

-- 

Thanks,

Matt



Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-07-06 Thread Cédric Jeanneret

[snip]

> 
> I would almost rather see us organize the directories by service
> name/project instead of implementation.
> 
> Instead of:
> 
> puppet/services/nova-api.yaml
> puppet/services/nova-conductor.yaml
> docker/services/nova-api.yaml
> docker/services/nova-conductor.yaml
> 
> We'd have:
> 
> services/nova/nova-api-puppet.yaml
> services/nova/nova-conductor-puppet.yaml
> services/nova/nova-api-docker.yaml
> services/nova/nova-conductor-docker.yaml

I'd also go for that one - it would be clearer and easier to search when
one wants to see how a service is configured, displaying all implementations
for a given service.
The current tree is a bit unusual.

> 
> (or perhaps even another level of directories to indicate
> puppet/docker/ansible?)
> 
> Personally, such an organization is something I'm more used to. It
> feels more similar to how most would expect a puppet module or ansible
> role to be organized, where you have the abstraction (service
> configuration) at a higher directory level than specific
> implementations.
> 
> It would also lend itself more easily to adding implementations only
> for specific services, and address the question of if a new top level
> implementation directory needs to be created. For example, adding a
> services/nova/nova-api-chef.yaml seems a lot less contentious than
> adding a top level chef/services/nova-api.yaml.

True. It would be easier to add new deployment methods, and probably easier to search.

> 
> It'd also be nice if we had a way to mark the default within a given
> service's directory. Perhaps services/nova/nova-api-default.yaml,
> which would be a new template that just consumes the default? Or
> perhaps a symlink, although it was pointed out symlinks don't work in
> swift containers. Still, that could possibly be addressed in our plan
> upload workflows. Then the resource-registry would point at
> nova-api-default.yaml. One could easily tell which is the default
> without having to cross reference with the resource-registry.

+42 for a way to get the default implementation - a template that just consumes
the right one should be enough and self-explanatory.
Having a tree based on services instead of implementations would allow that
easily.

> 
> 

-- 
Cédric Jeanneret
Software Engineer
DFG:DF





Re: [openstack-dev] [masakari] Introspective Instance Monitoring through QEMU Guest Agent

2018-07-06 Thread Tushar Patil
Hi Louie,

I will check the updated patch and reply before next IRC meeting.

Thank you for your patience.

Regards,
Tushar Patil

On Fri, Jul 6, 2018 at 11:43 AM Kwan, Louie 
wrote:

> Thanks Tushar Patil for the +1 for 547118.
>
> In regards of the following review:
> https://review.openstack.org/#/c/534958/
>
> Any more comment?
>
> Thanks.
> Louie
> 
> From: Tushar Patil (Code Review) [rev...@openstack.org]
> Sent: Tuesday, July 03, 2018 8:48 PM
> To: Kwan, Louie
> Cc: Tim Bell; zhangyanying; Waines, Greg; Li Yingjun; wangqiang-bj; Tushar
> Patil; Ken Young; NTT system-fault-ci masakari-integration-ci; wangqiang;
> Abhishek Kekane; takahara.kengo; Rikimaru Honjo; Adam Spiers; Sampath
> Priyankara (samP); Dinesh Bhor
> Subject: Change in openstack/masakari[master]: Introspective Instance
> Monitoring through QEMU Guest Agent
>
> Tushar Patil has posted comments on this change. (
> https://review.openstack.org/547118 )
>
> Change subject: Introspective Instance Monitoring through QEMU Guest Agent
> ..
>
>
> Patch Set 3: Workflow+1
>
> --
> To view, visit https://review.openstack.org/547118
> To unsubscribe, visit https://review.openstack.org/settings
>
> Gerrit-MessageType: comment
> Gerrit-Change-Id: I9efc6afc8d476003d3aa7fee8c31bcaa65438674
> Gerrit-PatchSet: 3
> Gerrit-Project: openstack/masakari
> Gerrit-Branch: master
> Gerrit-Owner: Louie Kwan 
> Gerrit-Reviewer: Abhishek Kekane 
> Gerrit-Reviewer: Adam Spiers 
> Gerrit-Reviewer: Dinesh Bhor 
> Gerrit-Reviewer: Greg Waines 
> Gerrit-Reviewer: Hieu LE 
> Gerrit-Reviewer: Ken Young 
> Gerrit-Reviewer: Li Yingjun 
> Gerrit-Reviewer: Louie Kwan 
> Gerrit-Reviewer: NTT system-fault-ci masakari-integration-ci <
> masakari.integration.t...@gmail.com>
> Gerrit-Reviewer: Rikimaru Honjo 
> Gerrit-Reviewer: Sampath Priyankara (samP) 
> Gerrit-Reviewer: Tim Bell 
> Gerrit-Reviewer: Tushar Patil 
> Gerrit-Reviewer: Tushar Patil 
> Gerrit-Reviewer: Zuul
> Gerrit-Reviewer: takahara.kengo 
> Gerrit-Reviewer: wangqiang 
> Gerrit-Reviewer: wangqiang-bj 
> Gerrit-Reviewer: zhangyanying 
> Gerrit-HasComments: No
>