On 08/08/2018 07:19 AM, Bernd Bausch wrote:
I would think you don't even reach the scheduling stage. Why bother
looking for a suitable compute node if you exceeded your quota anyway?
The message is in the conductor log because it's the conductor that does
most of the work. The others are just
On 08/07/2018 10:57 AM, Cody wrote:
Hi everyone,
I intentionally triggered an error by launching more instances than is
allowed by the 'cpu_allocation_ratio' set on a compute node. When it
comes to logs, the only place that contained a clue to explain the launch
failure was in the
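For reference, a minimal sketch of the capacity arithmetic behind this kind of
failure, with assumed numbers (a 16-core host and the default ratio of 16.0),
not taken from the environment described above:

```
# Illustrative only: assumed host size and the default allocation ratio.
physical_cores = 16
cpu_allocation_ratio = 16.0   # nova.conf [DEFAULT]/cpu_allocation_ratio default

schedulable_vcpus = int(physical_cores * cpu_allocation_ratio)   # 256

# Requesting more vCPUs than that total leads to the "no valid host"
# outcome, e.g. 70 instances of a 4-vCPU flavor:
requested_vcpus = 70 * 4                                          # 280
print(requested_vcpus > schedulable_vcpus)                        # True
```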
On 08/04/2018 07:35 PM, Michael Glasgow wrote:
On 8/2/2018 7:27 PM, Jay Pipes wrote:
It's not an exception. It's the normal course of events. NoValidHosts
means there were no compute nodes that met the requested resource
amounts.
To clarify, I didn't mean a python exception.
Neither did I. I
On 08/02/2018 06:18 PM, Michael Glasgow wrote:
On 08/02/18 15:04, Chris Friesen wrote:
On 08/02/2018 01:04 PM, melanie witt wrote:
The problem is an infamous one: your users are trying to boot
instances and they get "No Valid Host" and an instance in ERROR
state. They contact
On 08/02/2018 01:40 PM, Eric Fried wrote:
Jay et al-
And what I'm referring to is doing a single query per "related
resource/trait placement request group" -- which is pretty much what
we're heading towards anyway.
If we had a request for:
GET /allocation_candidates?
resources0=VCPU:1&
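The quoted request is cut off above; as a hedged illustration (the endpoint,
token, resource classes, traits, and microversion below are assumptions, not
Eric's actual example), a complete granular request along these lines might
look like:

```
# Sketch of a granular GET /allocation_candidates call; all values assumed.
import requests

PLACEMENT = "http://placement.example.com"
HEADERS = {
    "X-Auth-Token": "<token>",
    "OpenStack-API-Version": "placement 1.25",  # assumed granular-groups version
}
params = {
    "resources0": "VCPU:1,MEMORY_MB:1024",
    "required0": "HW_CPU_X86_AVX2",
    "resources1": "SRIOV_NET_VF:2",
    "required1": "CUSTOM_PHYSNET_PUBLIC",
}
resp = requests.get(PLACEMENT + "/allocation_candidates",
                    params=params, headers=HEADERS)
print(resp.json().get("allocation_requests"))
```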
On 08/02/2018 01:12 AM, Alex Xu wrote:
2018-08-02 4:09 GMT+08:00 Jay Pipes <jaypi...@gmail.com>:
On 08/01/2018 02:02 PM, Chris Friesen wrote:
On 08/01/2018 11:32 AM, melanie witt wrote:
I think it's definitely a signific
On 08/01/2018 02:02 PM, Chris Friesen wrote:
On 08/01/2018 11:32 AM, melanie witt wrote:
I think it's definitely a significant issue that troubleshooting "No
allocation
candidates returned" from placement is so difficult. However, it's not
straightforward to log detail in placement when the
ack. will review shortly. thanks, Chris.
On 07/30/2018 02:20 PM, Chris Dent wrote:
On Mon, 30 Jul 2018, Jay Pipes wrote:
On 07/26/2018 12:15 PM, Chris Dent wrote:
The `in_tree` calls happen from the report client method
`_get_providers_in_tree` which is called by
`_ensure_resource_provider
On 07/26/2018 12:15 PM, Chris Dent wrote:
The `in_tree` calls happen from the report client method
`_get_providers_in_tree` which is called by
`_ensure_resource_provider` which can be called from multiple
places, but in this case is being called both times from
On 07/27/2018 03:21 PM, Matt Riedemann wrote:
On 7/27/2018 2:14 PM, Matt Riedemann wrote:
From checking the history and review discussion on [3], it seems
that it was like that from the start. The key_pair quota is counted
when actually creating the keypair, but it is not shown in the API
On 07/18/2018 12:42 AM, Ian Wienand wrote:
The ideal is that a (say) Neutron dev gets a clear traceback from a
standard Python error in their change and happily fixes it. The
reality is probably more like this developer gets a tempest
failure due to nova failing to boot a cirros image, stemming
On 07/17/2018 03:36 AM, Neil Jerram wrote:
Can someone help me with how to look up a project name (aka tenant name)
for a known project/tenant ID, from code (specifically a mechanism
driver) running in the Neutron server?
I believe that means I need to make a GET REST call as here:
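The referenced call is elided above; one way to do the lookup is via
keystoneauth1 and python-keystoneclient. This is a sketch only, assuming
service credentials are available to the Neutron server; the auth URL,
user, and project ID are placeholders:

```
# Hedged sketch: look up a project's name from its ID via the Identity v3 API.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

auth = v3.Password(auth_url="http://controller:5000/v3",
                   username="neutron", password="<password>",
                   project_name="service",
                   user_domain_id="default", project_domain_id="default")
keystone = client.Client(session=session.Session(auth=auth))

project = keystone.projects.get("<known-project-id>")
print(project.name)
```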
On 07/16/2018 10:30 AM, Toni Mueller wrote:
Hi Jay,
On Fri, Jul 06, 2018 at 12:46:04PM -0400, Jay Pipes wrote:
There is no current way to say "On this dual-Xeon compute node, put all
workloads that don't care about dedicated CPUs on this socket and all
workloads that DO care about dedi
On 07/16/2018 10:15 AM, arkady.kanev...@dell.com wrote:
Is this for ephemeral storage handling?
For both ephemeral as well as root disk.
In other words, just act like Cinder isn't there and attach a big local
root disk to the instance.
Best,
-jay
Hi all,
Here's a testing and documentation bug that would be great for newcomers
to the placement project:
https://bugs.launchpad.net/nova/+bug/1781439
Come find us on #openstack-placement on Freenode IRC to chat about it if
you're interested!
Best,
-jay
On 07/16/2018 09:32 AM, Sean McGinnis wrote:
The other option would be to not use Cinder volumes so you just use local
storage on your compute nodes.
^^ yes, this.
-jay
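As a hedged sketch of what "just use local storage" means in practice: boot
from an image rather than a volume, so the root disk is allocated on the
compute node and sized by the flavor. Credentials, flavor, and image names
below are placeholders, not anything from this thread:

```
# Sketch: image-backed boot means the root disk lives on the compute node.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from novaclient import client as nova_client

sess = session.Session(auth=v3.Password(
    auth_url="http://controller:5000/v3", username="demo", password="<password>",
    project_name="demo", user_domain_id="default", project_domain_id="default"))
nova = nova_client.Client("2.1", session=sess)

flavor = nova.flavors.find(name="m1.large-local")   # flavor with a large root disk
image = nova.glance.find_image("centos-7")
server = nova.servers.create("db-node-1", image, flavor)
```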
This is placement update 18-28, a weekly update of ongoing development
related to the [OpenStack](https://www.openstack.org/) [placement
service](https://developer.openstack.org/api-ref/placement/).
This week I'm trying to fill Chris's estimable shoes while he's away.
# Most Important
##
DB work is now pushed for the single transaction reshape() function:
https://review.openstack.org/#/c/582383
Note that while working on that, I uncovered a bug in
AllocationList.delete_all() which needed to be fixed first:
https://bugs.launchpad.net/nova/+bug/1781430
A fix has been pushed
Let's just get the darn thing done in Rocky. I will have the DB work up
for review today.
-jay
On 07/12/2018 10:45 AM, Matt Riedemann wrote:
Continuing the discussion from the nova meeting today [1], I'm trying to
figure out what the risk / benefit / contingency is if we don't get the
On 07/06/2018 10:09 AM, Chris Dent wrote:
# Questions
* Will consumer id, project and user id always be a UUID? We've
established for certain that user id will not, but things are
less clear for the other two. This issue is compounded by the
fact that these two strings are different
On 07/09/2018 02:52 PM, Chris Dent wrote:
On Fri, 6 Jul 2018, Chris Dent wrote:
This is placement update 18-27, a weekly update of ongoing
development related to the [OpenStack](https://www.openstack.org/)
[placement
service](https://developer.openstack.org/api-ref/placement/). This
is a
On 07/06/2018 12:58 PM, Zane Bitter wrote:
On 02/07/18 19:13, Jay Pipes wrote:
Nova's primary competition is:
* Stand-alone Ironic
* oVirt and stand-alone virsh callers
* Parts of VMWare vCenter [3]
* MaaS in some respects
Do you see KubeVirt or Kata or Virtlet or RancherVM ending up
Hi Tony,
The short answer is that you cannot do that today. Today, each Nova
compute node is either "all in" for NUMA and CPU pinning or it's not.
This means that for resource-constrained environments like "The Edge!",
there are not very good ways to finely divide up a compute node and make
o I
don't really see the correlation here. That said, I'm 100% against a
monolithic application approach, as I've mentioned before.
Best,
-jay
Thanks,
Kevin
From: Jay Pipes [jaypi...@gmail.com]
Sent: Monday, July 02, 2018 4:13 PM
To: openstack-dev@
Thanks so much for your contributions to our ecosystem, Brianna! I'm sad
to see you go! :(
Best,
-jay
On 07/03/2018 03:13 PM, Poulos, Brianna L. wrote:
All,
After over five years of contributing security features to OpenStack,
the JHUAPL team is wrapping up our involvement with OpenStack.
On 07/02/2018 03:31 PM, Zane Bitter wrote:
On 28/06/18 15:09, Fox, Kevin M wrote:
* made the barrier to testing/development as low as 'curl
http://..minikube; minikube start' (this spurs adoption and
contribution)
That's not so different from devstack though.
* not having large
On 07/03/2018 08:47 AM, Doug Hellmann wrote:
If you have a scaling issue that may be solved by eventlet, that's
one thing, but please don't adopt eventlet just because a lot of
other projects have. We've tried several times to minimize our
reliance on eventlet because new releases tend to
On 06/27/2018 07:23 PM, Zane Bitter wrote:
On 27/06/18 07:55, Jay Pipes wrote:
Above, I was saying that the scope of the *OpenStack* community is
already too broad (IMHO). An example of projects that have made the
*OpenStack* community too broad are purpose-built telco applications
like
On 07/02/2018 03:12 PM, Fox, Kevin M wrote:
I think a lot of the pushback around not adding more common/required services
is the extra load it puts on ops, though. Hence these:
* Consider abolishing the project walls.
* simplify the architecture for ops
IMO, those need to change to break
On 07/02/2018 09:45 AM, Houssam ElBouanani wrote:
Hi,
I have recently finished installing a minimal OpenStack Queens
environment for a school project, and was asked whether it is possible
to deploy an additional compute node on bare metal, aka without an
underlying operating system, in order
On 06/28/2018 11:18 AM, Stephen Finucane wrote:
Just a quick heads up that an upcoming change to nova's 'tox.ini' will
change the behaviour of multiple environments slightly.
https://review.openstack.org/#/c/534382/9
With this change applied, tox will start sharing environment
directories
On 06/27/2018 12:20 PM, Matt Riedemann wrote:
On 6/27/2018 10:13 AM, Jay Pipes wrote:
I've -2'd the patch in question because of these concerns about
crossing the line between administrative and guest/virtual domains. It
may seem like a very trivial patch, but from what I can tell, it would
On 06/25/2018 05:28 PM, Mohammed Naser wrote:
Hi everyone:
While working with the OpenStack infrastructure team, we noticed that
we were having some intermittent issues, and we wanted to test a
theory that all VMs with this issue were landing on the same hypervisor.
However, there seems to
WARNING:
Danger, Will Robinson! Strong opinions ahead!
On 06/26/2018 10:00 PM, Zane Bitter wrote:
On 26/06/18 09:12, Jay Pipes wrote:
Is (one of) the problem(s) with our community that we have too small
of a scope/footprint? No. Not in the slightest.
Incidentally, this is an interesting
On 06/26/2018 08:41 AM, Chris Dent wrote:
Meanwhile, to continue [last week's theme](/tc-report-18-25.html),
the TC's role as listener, mediator, and influencer lacks
definition.
Zane wrote up a blog post explaining the various ways in which the
OpenStack Foundation is
On 06/18/2018 10:16 AM, Artom Lifshitz wrote:
Hey all,
For Rocky I'm trying to get live migration to work properly for
instances that have a NUMA topology [1].
A question that came up on one of patches [2] is how to handle
resources claims on the destination, or indeed whether to handle that
+openstack-dev since I believe this is an issue with the Heat source code.
On 06/18/2018 11:19 AM, Spyros Trigazis wrote:
Hello list,
I'm quite easily hitting this [1] exception with heat. The db server is
configured to have 1000 max_connections and 1000 max_user_connections,
and in the
+openstack-dev since I believe this is an issue with the Heat source code.
On 06/18/2018 11:19 AM, Spyros Trigazis wrote:
Hello list,
I'm quite easily hitting this [1] exception with heat. The db server is
configured to have 1000 max_connections and 1000 max_user_connections,
and in the
On 06/13/2018 05:33 PM, Matt Riedemann wrote:
On 6/13/2018 3:33 PM, melanie witt wrote:
We've been experimenting with a new process this cycle, Review Runways
[1] and we're about at the middle of the cycle now as we had the r-2
milestone last week June 7.
I wanted to start a thread and
On 06/13/2018 10:18 AM, Blair Bethwaite wrote:
Hi Jay,
Ha, I'm sure there's some wisdom hidden behind the trolling here?
I wasn't trolling at all. I was trying to be funny. Attempt failed I
guess :)
Best,
-jay
On 06/13/2018 09:58 AM, Blair Bethwaite wrote:
Hi all,
Wondering if anyone can share experience with architecting Nova KVM
boxes for large capacity high-performance storage? We have some
particular use-cases that want both high-IOPs and large capacity local
storage.
In the past we have
On 06/07/2018 01:56 PM, melanie witt wrote:
Hello Stackers,
Recently, we've received interest in increasing the maximum number of
allowed volumes to attach to a single instance > 26. The limit of 26 is
because of a historical limitation in libvirt (if I remember correctly)
and is no
On 06/07/2018 01:56 PM, melanie witt wrote:
Hello Stackers,
Recently, we've received interest in increasing the maximum number of
allowed volumes to attach to a single instance > 26. The limit of 26 is
because of a historical limitation in libvirt (if I remember correctly)
and is no
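For context, the 26 appears to come from single-letter virtio device names
(/dev/vda through /dev/vdz); a purely illustrative sketch:

```
# Illustrative only: 26 single-letter virtio disk names, /dev/vda .. /dev/vdz.
import string

device_names = ["/dev/vd" + letter for letter in string.ascii_lowercase]
print(len(device_names))                      # 26
print(device_names[0], device_names[-1])      # /dev/vda /dev/vdz
```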
Sorry for delay in responding on this. Comments inline.
On 05/29/2018 07:33 PM, Nadathur, Sundar wrote:
Hi all,
The Cyborg/Nova scheduling spec [1] details what traits will be
applied to the resource providers that represent devices like GPUs. Some
of the traits referred to vendor names.
On 06/06/2018 10:02 AM, Matt Riedemann wrote:
On 6/6/2018 8:24 AM, Jay Pipes wrote:
On 06/06/2018 09:10 AM, Artom Lifshitz wrote:
I think regardless of how we ended up with this situation, we're still
in a position where we have a public-facing API that could lead to
data-corruption when used
On 06/06/2018 09:10 AM, Artom Lifshitz wrote:
I think regardless of how we ended up with this situation, we're still
in a position where we have a public-facing API that could lead to
data-corruption when used in a specific way. That should never be the
case. I would think re-using the already
On 06/06/2018 07:46 AM, Matthew Booth wrote:
TL;DR I think we need to entirely disable swap volume for multiattach
volumes, and this will be an api breaking change with no immediate
workaround.
I was looking through tempest and came across
On 06/05/2018 08:50 AM, Stephen Finucane wrote:
I thought nested resource providers were already supported by placement?
To the best of my knowledge, what is /not/ supported is virt drivers
using these to report NUMA topologies but I doubt that affects you. The
placement guys will need to
On 06/04/2018 05:02 PM, Doug Hellmann wrote:
The most significant point of interest to the contributor
community from this section of the meeting was the apparently
overwhelming interest from companies employing contributors, as
well as 2/3 of the contributors to recent releases who responded
to
On 06/01/2018 03:02 PM, Dan Smith wrote:
Dan, you are leaving out the parts of my response where I am agreeing
with you and saying that your "Option #2" is probably the things we
should go with.
No, what you said was:
I would vote for Option #2 if it comes down to it.
Implying (to me at
On 05/31/2018 02:26 PM, Eric Fried wrote:
1. Make everything perform the pivot on compute node start (which can be
re-used by a CLI tool for the offline case)
2. Make everything default to non-nested inventory at first, and provide
a way to migrate a compute node and its instances one at
Dan, you are leaving out the parts of my response where I am agreeing
with you and saying that your "Option #2" is probably the things we
should go with.
-jay
On 06/01/2018 12:22 PM, Dan Smith wrote:
So, you're saying the normal process is to try upgrading the Linux
kernel and associated
On 05/31/2018 01:09 PM, Dan Smith wrote:
My feeling is that we should not attempt to "migrate" any allocations
or inventories between root or child providers within a compute node,
period.
While I agree this is the simplest approach, it does put a lot of
responsibility on the operators to do
On 05/31/2018 05:10 AM, Sylvain Bauza wrote:
After considering the whole approach and discussing with a couple of folks
over IRC, here is what I feel is the best approach for a seamless upgrade:
- VGPU inventory will be kept on root RP (for the first type) in
Queens so that a compute service
On 05/30/2018 07:06 AM, Balázs Gibizer wrote:
nova-manage is another possible way, similar to my idea #c), but there
I imagined the logic in placement-manage instead of nova-manage.
Please note there is no placement-manage CLI tool.
Best,
-jay
On 05/29/2018 09:12 AM, Sylvain Bauza wrote:
We could keep the old inventory in the root RP for the previous vGPU
type already supported in Queens and just add other inventories for
other vGPU types now supported. That looks like possibly the simplest
option, as the virt driver knows that.
What
On 05/29/2018 01:06 PM, Matt Riedemann wrote:
I'm wondering if the RequestSpec.project_id is null? Like, I wonder if
you're hitting this bug:
https://bugs.launchpad.net/nova/+bug/1739318
Although if this is a clean Ocata environment with new instances, you
shouldn't have that problem.
The hosts you are attempting to migrate *to* do not have the
filter_tenant_id property set to the same tenant ID as compute host 2,
which originally hosted the instance.
That is why you see this in the scheduler logs when evaluating the
fitness of compute host 1 and compute host 3:
"fails
On 05/23/2018 12:49 PM, Colleen Murphy wrote:
On Tue, May 22, 2018, at 10:57 PM, Jay Pipes wrote:
Are any of the distributions of OpenStack listed at
https://www.openstack.org/marketplace/distros/ hosted on openstack.org
infrastructure? No. And I think that is completely appropriate.
Hang
Warning: strong opinions ahead.
On 05/22/2018 02:54 PM, Dean Troyer wrote:
Developers will need to re-create a repo locally in
order to work or test the code and create reviews (there are more git
challenges here). It would be challenging to do functional testing on
the rest of STX in CI
On 05/16/2018 08:18 PM, David G. Bingham wrote:
YoNova Gurus :-),
We here at GoDaddy are getting hot and heavy into Cells V2 these days
and would like to propose an enhancement or maybe see if something like
this is already in the works.
Need:
To be able to “synchronize” cells from a
On 05/19/2018 03:19 PM, Blair Bethwaite wrote:
Relatively Cyborg-naive question here...
I thought Cyborg was going to support a hot-plug model. So I certainly
hope it is not the expectation that accelerators will be encoded into
Nova flavors? That will severely limit its usefulness.
Hi Blair!
On 05/18/2018 07:58 AM, Nadathur, Sundar wrote:
Agreed. Not sure how other projects handle it, but here's the situation
for Cyborg. A request may get scheduled on a compute node with no
intervention by Cyborg. So, the earliest check that can be made today is
in the selected compute node. A
On 05/16/2018 01:01 PM, Nadathur, Sundar wrote:
Hi,
The Cyborg quota spec [1] proposes to implement a quota (maximum
usage) for accelerators on a per-project basis, to prevent one project
(tenant) from over-using some resources and starving other tenants.
There are separate resource
On 05/11/2018 12:21 PM, Zane Bitter wrote:
On 11/05/18 11:46, Jay Pipes wrote:
On 05/10/2018 08:12 PM, Zane Bitter wrote:
On 10/05/18 16:45, Matt Riedemann wrote:
On 5/10/2018 3:38 PM, Zane Bitter wrote:
How can we avoid (or get out of) the local maximum trap and ensure
that OpenStack
On 05/10/2018 08:12 PM, Zane Bitter wrote:
On 10/05/18 16:45, Matt Riedemann wrote:
On 5/10/2018 3:38 PM, Zane Bitter wrote:
How can we avoid (or get out of) the local maximum trap and ensure
that OpenStack will meet the needs of all the users we want to serve,
not just those whose needs are
Hi Stackers,
Here's a small bug that would be ideal for a new contributor to pick up:
https://bugs.launchpad.net/nova/+bug/1770636
Come find us on #openstack-placement on Freenode if you'd like to pick
it up and run with it.
Best,
-jay
On 05/07/2018 05:55 AM, 倪蔚辰 wrote:
Hi, all
I would like to propose a blueprint (not yet proposed), which is related
to OpenStack Nova. I hope to get some comments by explaining my idea in
this e-mail. Please contact me if you have any comments.
Background
Under current OpenStack,
On 05/03/2018 12:57 PM, Ed Leafe wrote:
On May 2, 2018, at 2:40 AM, Gilles Dubreuil wrote:
• We should get a common consensus before all projects start to implement it.
This is going to be raised during the API SIG weekly meeting later this week.
API developers (at
On 05/02/2018 10:07 AM, Matt Riedemann wrote:
On 5/1/2018 5:26 PM, Arvind N wrote:
In cases of rebuilding an instance using a different image where
the image traits have changed between the original launch and the
rebuild, is it reasonable to ask to just re-launch a new instance with
the
On 05/02/2018 04:39 PM, Torin Woltjer wrote:
> There is no HA behaviour for compute nodes.
>
> You are referring to HA of workloads running on compute nodes, not HA of
> compute nodes themselves.
It was a mistake for me to say HA when referring to compute and
instances. Really I want to
On 05/02/2018 02:43 PM, Torin Woltjer wrote:
I am working on setting up OpenStack for HA, and one of the last orders of
business is getting HA behavior out of the compute nodes.
There is no HA behaviour for compute nodes.
Is there a project that will automatically evacuate instances from a
Mathieu,
How do you handle issues where compute nodes are associated with
multiple aggregates and both aggregates have different values for a
particular filter key?
Is that a human-based validation process to ensure you don't have that
situation?
Best,
-jay
On 04/30/2018 12:41 PM,
On 04/30/2018 09:18 AM, Mikhail Medvedev wrote:
On Sun, Apr 29, 2018 at 4:29 PM, Ed Leafe wrote:
Another data point that might be illuminating is: how many sites use a custom
(i.e., not in-tree) filter or weigher? One of the original design tenets of the
scheduler was that
On 04/24/2018 12:04 PM, Fox, Kevin M wrote:
Could the major components (nova-api, neutron-server, glance-apiserver, etc.) be
built in a way to have one process for all of them, and combine the upgrade steps
such that there is also one db-sync for the entire constellation?
So, basically the
On 04/23/2018 05:51 PM, Arvind N wrote:
Thanks for the detailed options, Matt/Eric/Jay.
Just a few of my thoughts:
For #1, we can make the explanation very clear that we rejected the
request because the original traits specified in the original image and
the new traits specified in the new
On 04/23/2018 03:48 PM, Matt Riedemann wrote:
We seem to be at a bit of an impasse in this spec amendment [1] so I
want to try and summarize the alternative solutions as I see them.
The overall goal of the blueprint is to allow defining traits via image
properties, like flavor extra specs.
On 04/19/2018 12:27 PM, Matt Riedemann wrote:
On 4/19/2018 11:06 AM, Matthew Booth wrote:
I'm ambivalent, tbh, but I think it's better to pick one. I thought
we'd picked 'evacuate' based on the TODOs from Matt R:
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n2985
Hello, Andrey! Comments inline...
On 04/19/2018 10:27 AM, Andrey Volkov wrote:
Hello,
From my understanding, we have a race between the scheduling
process and the host weight update.
I made a simple experiment: on a 50 fake-host environment,
it was asked to boot 40 VMs that should be placed 1
On 04/19/2018 09:15 AM, Matthew Booth wrote:
We've had inconsistent naming of recreate/evacuate in Nova for a long
time, and it will persist in a couple of places for a while more.
However, I've proposed the following to rename 'recreate' to
'evacuate' everywhere with no rpc/api impact here:
ory to
everyone. And we'll delay, yet again, getting functionality into this
release that serves 90% of use cases because we are obsessing over the
0.01% of use cases that may pop up later.
Best,
-jay
One other thing inline below, not related to the immediate subject.
On 04/18/2018 12:40 PM, Jay Pi
On 04/18/2018 11:58 AM, Matt Riedemann wrote:
On 4/18/2018 9:06 AM, Jay Pipes wrote:
"By default, should resources/traits submitted in different numbered
request groups be supplied by separate resource providers?"
Without knowing all of the hairy use cases, I'm trying to channel
On 04/18/2018 01:14 PM, Matt Riedemann wrote:
On 4/18/2018 12:09 PM, Chris Friesen wrote:
If this happens, is it clear to the end-user that the reason the boot
failed is that the cloud doesn't support trusted cert IDs for
boot-from-vol? If so, then I think that's totally fine.
If you're
On 04/18/2018 12:41 PM, Matt Riedemann wrote:
There is a compute REST API change proposed [1] which will allow users
to pass trusted certificate IDs to be used with validation of images
when creating or rebuilding a server. The trusted cert IDs are based on
certificates stored in some key
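As a hedged sketch of how such a request might look once the proposal merges
(the field name, microversion, endpoint, and UUIDs below are assumptions about
the eventual API, not details taken from the review):

```
# Sketch of a server create passing trusted certificate IDs; all values assumed.
import json
import requests

body = {
    "server": {
        "name": "signed-image-server",
        "imageRef": "<image-uuid>",
        "flavorRef": "<flavor-id>",
        "networks": "auto",
        "trusted_image_certificates": ["<cert-uuid-1>", "<cert-uuid-2>"],
    }
}
resp = requests.post("http://nova.example.com/v2.1/servers",
                     data=json.dumps(body),
                     headers={"X-Auth-Token": "<token>",
                              "Content-Type": "application/json",
                              "OpenStack-API-Version": "compute 2.63"})
print(resp.status_code)
```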
queryparam to force
separation; or we're split by default and use a queryparam to allow the
unrestricted behavior.
Otherwise I agree with everything Jay said.
-efried
On 04/18/2018 09:06 AM, Jay Pipes wrote:
Stackers,
Eric Fried and I are currently at an impasse regarding a decision that
wil
Stackers,
Eric Fried and I are currently at an impasse regarding a decision that
will have far-reaching (and end-user facing) impacts to the placement
API and how nova interacts with the placement service from the nova
scheduler.
We need to make a decision regarding the following question:
On 04/16/2018 09:20 PM, melanie witt wrote:
I propose that we remove the z/VM driver blueprint from the runway at
this time and place it back into the queue while work on the driver
continues. At a minimum, we need to see z/VM CI running with
[validation]run_validation = True in tempest.conf
On 04/16/2018 06:23 PM, Eric Fried wrote:
I still don't see a use in returning the root providers in the
allocation requests -- since there is nothing consuming resources from
those providers.
And we already return the root_provider_uuid for all providers involved
in allocation requests within
Other than that, I don't see a reason to change the response from
GET /allocation_candidates at this time.
Best,
-jay
On 04/16/2018 01:58 PM, Jay Pipes wrote:
Sorry it took so long to respond. Comments inline.
On 03/30/2018 08:34 PM, Eric Fried wrote:
Folks who care about placement (but especially
Sorry it took so long to respond. Comments inline.
On 03/30/2018 08:34 PM, Eric Fried wrote:
Folks who care about placement (but especially Jay and Tetsuro)-
I was reviewing [1] and was at first very unsatisfied that we were not
returning the anchor providers in the results. But as I started
On 04/13/2018 09:57 AM, Chris Dent wrote:
# Questions
* Is anyone already on the hook to implement the multiple member_of
support described by this spec amendment:
https://review.openstack.org/#/c/555413/ ?
I got this. Should have code up today for it.
Best,
-jay
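For illustration, a hedged sketch of what multiple member_of support allows
once implemented (the aggregate UUIDs, endpoint, and microversion are
assumptions): repeated member_of parameters are ANDed together, while the
in: prefix ORs aggregates within a single parameter.

```
# Sketch: two member_of query parameters, ANDed; all values are placeholders.
import requests

params = [
    ("resources", "VCPU:1,MEMORY_MB:1024"),
    ("member_of", "in:<agg-uuid-1>,<agg-uuid-2>"),  # in either of these...
    ("member_of", "<agg-uuid-3>"),                  # ...and also in this one
]
resp = requests.get("http://placement.example.com/allocation_candidates",
                    params=params,
                    headers={"X-Auth-Token": "<token>",
                             "OpenStack-API-Version": "placement 1.24"})
print(resp.json().get("allocation_requests"))
```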
Hi Li, please do remember to use a [cyborg] topic marker in your subject
line. (I've added one). Comments inline.
On 04/12/2018 11:08 PM, Li Liu wrote:
Hi Team,
While wrapping up the spec for FPGA programmability, I think we are still
missing the reconfigurability part of accelerators.
For instance,
Thanks, as always, for the excellent summary emails, Chris. Comments inline.
On 04/06/2018 01:54 PM, Chris Dent wrote:
This is "contract" style update. New stuff will not be added to the
lists.
# Most Important
There doesn't appear to be anything new with regard to most
important. That which
On 04/03/2018 11:51 AM, Michael Bayer wrote:
On Tue, Apr 3, 2018 at 11:41 AM, Jay Pipes <jaypi...@gmail.com> wrote:
On 04/03/2018 11:07 AM, Michael Bayer wrote:
Yes.
b. oslo.db script to run generically, yes or no?
No. Just have TripleO install galera_innoptimizer and run it in
On 04/03/2018 11:07 AM, Michael Bayer wrote:
The MySQL / MariaDB variants we use nowadays default to
innodb_file_per_table=ON and we also set this flag to ON in installer
tools like TripleO. The reason we like file per table is so that
we don't grow an enormous ibdata file that can't be
Stackers,
Today, a few of us had a chat to discuss changes to the Placement REST
API [1] that will allow multiple clients to safely update a single
consumer's set of resource allocations. This email is to summarize the
decisions coming out of that chat.
Note that Ed is currently updating
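As a hedged sketch of the generation-guarded update this is heading toward
(the request shape, microversion, and UUIDs are assumptions about what
eventually merges, not a transcript of the chat): each writer supplies the
consumer generation it last read, and a conflict means another client
updated the allocations in the meantime.

```
# Sketch of PUT /allocations/{consumer_uuid} with a consumer generation;
# every value below (UUIDs, microversion, endpoint) is an assumption.
import json
import requests

body = {
    "allocations": {
        "<resource-provider-uuid>": {
            "resources": {"VCPU": 1, "MEMORY_MB": 1024},
        },
    },
    "consumer_generation": 1,     # generation read before this update
    "project_id": "<project-uuid>",
    "user_id": "<user-uuid>",
}
resp = requests.put("http://placement.example.com/allocations/<consumer-uuid>",
                    data=json.dumps(body),
                    headers={"X-Auth-Token": "<token>",
                             "Content-Type": "application/json",
                             "OpenStack-API-Version": "placement 1.28"})
# A 409 here would signal a concurrent update by another client.
print(resp.status_code)
```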
On 04/03/2018 06:48 AM, Chris Dent wrote:
On Mon, 2 Apr 2018, Alex Schultz wrote:
So this is/was valid. A few years back, there were some perf tests done
with various combinations of processes/threads, and for Keystone it was
determined that threads should be 1 while you should adjust the
process
On 03/31/2018 11:57 PM, Matthew Thode wrote:
The requirements project had a good run, but things seem to be winding
down. We only break openstack a couple times a cycle now, and that's
just not acceptable. The graph must go up and to the right. So, it's
time for the requirements project to
On 03/28/2018 07:03 PM, Nadathur, Sundar wrote:
Thanks, Eric. Looks like there are no good solutions even as candidates,
but only options with varying levels of unacceptability. It is funny
that the option that is considered the least unacceptable is to let
the problem happen and then
On 03/28/2018 03:35 PM, Matt Riedemann wrote:
On 3/27/2018 10:37 AM, Jay Pipes wrote:
If we want to actually fix the issue once and for all, we need to make
availability zones a real thing that has a permanent identifier (UUID)
and store that permanent identifier in the instance