On 11/19/2018 10:25 AM, Yedhu Sastri wrote:
Hello All,
I have some use-cases which I want to test on the PowerPC
architecture (ppc64). As I don't have any Power machines I would like to
try it with ppc64 VMs. Is it possible to run this kind of VM on my
OpenStack cluster (Queens) which runs on
On 11/8/2018 5:30 AM, Rambo wrote:
When I resize the instance, the compute node reports that
"libvirtError: internal error: qemu unexpectedly closed the monitor:
2018-11-08T09:42:04.695681Z qemu-kvm: cannot set up guest memory
'pc.ram': Cannot allocate memory". Has anyone seen this
On 10/25/2018 12:00 PM, Jay Pipes wrote:
On 10/25/2018 01:38 PM, Chris Friesen wrote:
On 10/24/2018 9:10 AM, Jay Pipes wrote:
Nova's API has the ability to create "quota classes", which are
basically limits for a set of resource types. There is something
called the "default quo
On 10/24/2018 9:10 AM, Jay Pipes wrote:
Nova's API has the ability to create "quota classes", which are
basically limits for a set of resource types. There is something called
the "default quota class" which corresponds to the limits in the
CONF.quota section. Quota classes are basically
On 10/9/2018 1:20 PM, Jay Pipes wrote:
On 10/09/2018 11:04 AM, Balázs Gibizer wrote:
If you do the force flag removal in a new microversion that also means
(at least to me) that you should not change the behavior of the force
flag in the old microversions.
Agreed.
Keep the old, buggy and
While discussing the "Add HPET timer support for x86 guests"
blueprint[1] one of the items that came up was how to represent what are
essentially flags that impact both scheduling and configuration. Eric
Fried posted a spec to start a discussion[2], and a number of nova
developers met on a
On 10/2/2018 4:15 PM, Giridhar Jayavelu wrote:
Hi,
Currently, all nova components are packaged in the same helm chart "nova". Are
there any plans to separate nova-compute from the rest of the services?
What should be the approach for deploying multiple nova-compute nodes using
OpenStack helm charts?
Hi,
At the PTG, it was suggested that each project should tag their bugs
with "<project>-bug" to avoid tags being "leaked" across projects, or
something like that.
Could someone elaborate on why this was recommended? It seems to me
that it'd be better for all projects to just use the "bug" tag for
On 9/12/2018 12:04 PM, Doug Hellmann wrote:
This came up in a Vancouver summit session (the python3 one I think). General
consensus there seemed to be that we should have grenade jobs that run python2
on the old side and python3 on the new side and test the update from one to
another through
On 08/30/2018 11:03 AM, Jeremy Stanley wrote:
The proposal is simple: create a new openstack-discuss mailing list
to cover all the above sorts of discussion and stop using the other
four.
Do we want to merge usage and development onto one list? That could be a busy
list for someone who's
On 08/30/2018 08:54 AM, Eugen Block wrote:
Hi Jay,
You need to set your ram_allocation_ratio nova.CONF option to 1.0 if you're
running into OOM issues. This will prevent overcommit of memory on your
compute nodes.
I understand that, the overcommitment works quite well most of the time.
It
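The overcommit arithmetic behind this thread can be sketched as simple multiplication; the following is illustrative only, not nova's actual resource-tracker code (1.5 is nova's documented default for `ram_allocation_ratio`, and the 64 GiB host is hypothetical):

```python
# Minimal sketch of how ram_allocation_ratio bounds schedulable RAM.
# Illustrative only; not nova's resource-tracker code.

def schedulable_ram_mb(total_mb, ram_allocation_ratio):
    """RAM the scheduler may hand out on one compute node."""
    return int(total_mb * ram_allocation_ratio)

host_mb = 65536  # hypothetical 64 GiB compute node

# With the nova default of 1.5 the scheduler can overcommit...
print(schedulable_ram_mb(host_mb, 1.5))  # 98304 -- more than physical RAM
# ...while 1.0 caps it at physical RAM, avoiding overcommit-driven OOM.
print(schedulable_ram_mb(host_mb, 1.0))  # 65536
```

This is why setting the ratio to 1.0 prevents the OOM situations discussed above: the scheduler can never place more guest RAM on a node than physically exists.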
On 08/29/2018 10:02 AM, Jay Pipes wrote:
Also, I'd love to hear from anyone in the real world who has successfully
migrated (live or otherwise) an instance that "owns" expensive hardware
(accelerators, SR-IOV PFs, GPUs or otherwise).
I thought cold migration of instances with such devices was
On 08/21/2018 04:33 PM, melanie witt wrote:
If we separate into two different groups, all of the items I discussed in my
previous reply will become cross-project efforts. To me, this means that the
placement group will have their own priorities and goal setting process and if
their priorities
On 08/21/2018 01:53 PM, melanie witt wrote:
Given all of that, I'm not seeing how *now* is a good time to separate the
placement project under separate governance with separate goals and priorities.
If operators need things for compute, that are well-known and that placement was
created to
On 08/20/2018 11:44 AM, Zane Bitter wrote:
If you want my personal opinion then I'm a big believer in incremental change.
So, despite recognising that it is born of long experience of which I have been
blissfully mostly unaware, I have to disagree with Chris's position that if
anybody lets you
On 08/04/2018 05:18 PM, Matt Riedemann wrote:
On 8/3/2018 9:14 AM, Chris Friesen wrote:
I'm of two minds here.
On the one hand, you have the case where the end user has accidentally
requested some combination of things that isn't normally available, and they
need to be able to ask the provider
On 08/14/2018 10:33 AM, Tobias Urdin wrote:
My goal is that we will be able to swap to Storyboard during the Stein cycle but
considering that we have a low activity on
bugs my opinion is that we could do this swap very easily anytime soon as long
as everybody is in favor of it.
Please let me
On 08/07/2018 07:29 AM, Matt Riedemann wrote:
On 8/7/2018 1:10 AM, Flint WALRUS wrote:
I didn’t have time to check StarlingX code quality; how did it seem while
you were doing your analysis?
I didn't dig into the test diffs themselves, but it was my impression that from
what I was poking
On 08/13/2018 08:26 AM, Jay Pipes wrote:
On 08/13/2018 10:10 AM, Matthew Booth wrote:
I suspect I've misunderstood, but I was arguing this is an anti-goal.
There's no reason to do this if the db is working correctly, and it
would violate the principle of least surprise in dbs with legacy
On 08/13/2018 02:07 AM, Rambo wrote:
Hi,all
I find it important to be able to live-resize an instance in a production
environment, especially to live-downsize the disk. We have discussed this
for many years, but I don't know why the bp[1] wasn't approved. Can you tell
me more about this? Thank you very
On 08/02/2018 06:27 PM, Jay Pipes wrote:
On 08/02/2018 06:18 PM, Michael Glasgow wrote:
More generally, any time a service fails to deliver a resource which it is
primarily designed to deliver, it seems to me at this stage that should
probably be taken a bit more seriously than just "check
On 08/02/2018 01:04 PM, melanie witt wrote:
The problem is an infamous one, which is, your users are trying to boot
instances and they get "No Valid Host" and an instance in ERROR state. They
contact support, and now support is trying to determine why NoValidHost
happened. In the past, they
On 08/02/2018 04:10 AM, Chris Dent wrote:
When people ask for something like what Chris mentioned:
hosts with enough CPU:
hosts that also have enough disk:
hosts that also have enough memory:
hosts that also meet extra spec host aggregate keys:
hosts that also meet
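The step-by-step narrowing Chris describes can be sketched as a chain of filters that logs how many hosts survive each stage, which is roughly the breadcrumb trail the old scheduler logs gave operators. The host data and request below are hypothetical; this is not nova or placement code:

```python
# Illustrative sketch of filter-by-filter narrowing with per-stage counts.
# Hypothetical hosts and request; not nova/placement code.

hosts = [
    {"name": "c0", "vcpus": 8, "disk_gb": 100, "ram_mb": 16384},
    {"name": "c1", "vcpus": 2, "disk_gb": 500, "ram_mb": 32768},
    {"name": "c2", "vcpus": 8, "disk_gb": 20,  "ram_mb": 8192},
]

request = {"vcpus": 4, "disk_gb": 50, "ram_mb": 8192}

filters = [
    ("enough CPU",    lambda h: h["vcpus"] >= request["vcpus"]),
    ("enough disk",   lambda h: h["disk_gb"] >= request["disk_gb"]),
    ("enough memory", lambda h: h["ram_mb"] >= request["ram_mb"]),
]

candidates = hosts
for label, keep in filters:
    candidates = [h for h in candidates if keep(h)]
    # This per-stage count is exactly what makes "No Valid Host"
    # debuggable: you can see which filter emptied the list.
    print(f"hosts that also have {label}: {len(candidates)}")
```

The point of the debate in this thread is that placement answers the final question in one query, so these intermediate counts are not naturally available.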
On 08/01/2018 11:34 PM, Joshua Harlow wrote:
And I would be able to say request the explanation for a given request id
(historical even) so that analysis could be done post-change and pre-change (say
I update the algorithm for selection) so that the effects of alternations to
said decisions
On 08/01/2018 11:32 AM, melanie witt wrote:
I think it's definitely a significant issue that troubleshooting "No allocation
candidates returned" from placement is so difficult. However, it's not
straightforward to log detail in placement when the request for allocation
candidates is essentially
On 08/01/2018 11:17 AM, Ben Nemec wrote:
On 08/01/2018 11:23 AM, Chris Friesen wrote:
The fact that there is no real way to get the equivalent of the old detailed
scheduler logs is a known shortcoming in placement, and will become more of a
problem if/when we move more complicated things
On 08/01/2018 09:58 AM, Andrey Volkov wrote:
Hi,
It seems you need first to check what placement knows about resources of your
cloud.
This can be done either with REST API [1] or with osc-placement [2].
For osc-placement you could use:
pip install osc-placement
openstack allocation candidate
On 07/25/2018 06:22 PM, Alex Xu wrote:
2018-07-26 1:43 GMT+08:00 Chris Friesen <chris.frie...@windriver.com>:
Keypairs are weird in that they're owned by users, not projects. This is
arguably wrong, since it can cause problems if a user boots an in
On 07/25/2018 06:21 PM, Alex Xu wrote:
2018-07-26 0:29 GMT+08:00 William M Edmonds <edmon...@us.ibm.com>:
Ghanshyam Mann <gm...@ghanshyammann.com>
wrote on 07/25/2018 05:44:46 AM:
... snip ...
> 1. is it ok to show the keypair used info via API ? any original
On 07/25/2018 10:29 AM, William M Edmonds wrote:
Ghanshyam Mann wrote on 07/25/2018 05:44:46 AM:
... snip ...
> 1. is it ok to show the keypair used info via API ? any original
> rational not to do so or it was just like that from starting.
keypairs aren't tied to a tenant/project, so how
On 07/24/2018 12:47 PM, Clark Boylan wrote:
Can you get by with qemu or is nested virt required?
Pretty sure that nested virt is needed in order to test CPU pinning.
As for hugepages, I've done a quick survey of cpuinfo across our clouds and all
seem to have pse available but not all have
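The flag survey mentioned above boils down to checking the `flags` line of /proc/cpuinfo (`pse` indicates 2 MB/4 MB page support, `pdpe1gb` indicates 1 GB pages). A minimal sketch against a made-up sample line:

```python
# Sketch: detect hugepage-related CPU flags from a /proc/cpuinfo-style
# "flags" line. The sample line below is invented for illustration.

def hugepage_flags(flags_line):
    flags = set(flags_line.split())
    return {f: (f in flags) for f in ("pse", "pdpe1gb")}

sample = "fpu vme de pse tsc msr pae mce"  # hypothetical host without 1G pages
print(hugepage_flags(sample))  # {'pse': True, 'pdpe1gb': False}
```

On a real host you would feed it the `flags` line of `/proc/cpuinfo` instead of the sample string.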
Hi,
I'm on a team that is starting to use StoryBoard, and I just thought I'd raise
some issues I've recently run into. It may be that I'm making assumptions based
on previous tools that I've used (Launchpad and Atlassian's Jira) and perhaps
StoryBoard is intended to be used differently, so
On 07/18/2018 03:43 PM, melanie witt wrote:
On Wed, 18 Jul 2018 15:14:55 -0500, Matt Riedemann wrote:
On 7/18/2018 1:13 PM, melanie witt wrote:
Can we get rid of multi-create? It keeps causing complications, and
it already
has weird behaviour if you ask for min_count=X and max_count=Y and
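The min_count/max_count semantics being referred to can be sketched as: try for max_count, accept anything at or above min_count, otherwise fail the whole request. This is a simplified illustration of the idea, not nova's actual code, and the real API has more corner cases than shown:

```python
# Illustrative sketch of multi-create semantics: aim for max_count,
# accept >= min_count, otherwise fail the whole request. Not nova code.

def multi_create(capacity, min_count, max_count):
    if capacity < min_count:
        raise RuntimeError("No valid host (cannot satisfy min_count)")
    return min(capacity, max_count)

print(multi_create(10, 2, 5))  # 5 -- capacity allows the full max_count
print(multi_create(3, 2, 5))   # 3 -- fewer than asked for, but >= min_count
```

The "weird behaviour" complaint is about exactly this middle case: the user asked for up to Y servers and silently got some number between X and Y.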
On 07/18/2018 10:14 AM, Matt Riedemann wrote:
As can be seen from logstash [1] this bug is hurting us pretty bad in the check
queue.
I thought I originally had this fixed with [2] but that turned out to only be
part of the issue.
I think I've identified the problem but I have failed to write a
On 07/10/2018 03:04 AM, jayshankar nair wrote:
Hi,
I am trying to create an instance of cirros os(Project/Compute/Instances). I am
getting the following error.
Error: Failed to perform requested operation on instance "cirros1", the instance
has an error status: Please try again later [Error:
On 06/23/2018 08:38 AM, Volodymyr Litovka wrote:
Dear friends,
I did some tests with making volume available without stopping VM. I'm using
CEPH and these steps produce the following results:
1) openstack volume set --state available [UUID]
- nothing changed inside both VM (volume is still
On 06/21/2018 07:04 AM, Artom Lifshitz wrote:
As I understand it, Artom is proposing to have a larger race window,
essentially
from when the scheduler selects a node until the resource audit runs on that
node.
Exactly. When writing the spec I thought we could just call the
On 06/21/2018 07:50 AM, Mooney, Sean K wrote:
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Side question... does either approach touch PCI device management
during live migration?
I ask because the only workloads I've ever seen that pin guest vCPU
threads to
On 06/20/2018 10:00 AM, Sylvain Bauza wrote:
When we reviewed the spec, we agreed as a community to say that we should still
get race conditions once the series is implemented, but at least it helps
operators.
Quoting :
"It would also be possible for another instance to steal NUMA resources
On 06/19/2018 01:59 PM, Artom Lifshitz wrote:
Adding
claims support later on wouldn't change any on-the-wire messaging, it would
just make things work more robustly.
I'm not even sure about that. Assuming [1] has at least the right
idea, it looks like it's an either-or kind of thing: either we
On 06/18/2018 08:16 AM, Artom Lifshitz wrote:
Hey all,
For Rocky I'm trying to get live migration to work properly for
instances that have a NUMA topology [1].
A question that came up on one of patches [2] is how to handle
resources claims on the destination, or indeed whether to handle that
On 06/13/2018 07:58 AM, Blair Bethwaite wrote:
Is the collective wisdom to use LVM based instances for these use-cases? Putting
a host filesystem with qcow2 based disk images on it can't help
performance-wise... Though we have not used LVM based instance storage before,
are there any
On 06/07/2018 12:07 PM, Matt Riedemann wrote:
On 6/7/2018 12:56 PM, melanie witt wrote:
C) Create a configurable API limit for maximum number of volumes to attach to
a single instance that is either a quota or similar to a quota. Pros: lets
operators opt-in to a maximum that works in their
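Option C above amounts to a configurable attach-count check at the API layer. A minimal sketch, where the option name `max_volumes_per_instance` and the default of 26 are both hypothetical placeholders, not nova's actual option or limit:

```python
# Minimal sketch of a configurable cap on volume attachments per instance.
# "max_volumes_per_instance" and the default 26 are hypothetical.

class TooManyVolumes(Exception):
    pass

def check_attach(attached_count, max_volumes_per_instance=26):
    """Raise if attaching one more volume would exceed the limit."""
    if attached_count + 1 > max_volumes_per_instance:
        raise TooManyVolumes(
            f"instance already has {attached_count} volumes "
            f"(limit {max_volumes_per_instance})")

check_attach(5)  # fine
try:
    check_attach(26)
except TooManyVolumes as e:
    print(e)
```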
On 06/04/2018 05:43 AM, Tobias Urdin wrote:
Hello,
I have received a question about a more specialized use case where we need to
isolate several hypervisors to a specific project. My first thinking was
using nova flavors for only that project and add extra specs properties to
use a specific
On 05/31/2018 04:14 PM, Moore, Curt wrote:
The challenge is that the Glance image transfer is _glacially slow_
when using the Glance HTTP API (~30 min for a 50GB Windows image (it’s Windows,
it’s huge with all of the necessary tools installed)). If libvirt can instead
perform an
I think it'd be worth filing a bug against the "openstack" client...most of the
clients try to be compatible with any server version.
Probably best to include the details from the run with the --debug option for
both the new and old version of the client.
Chris
On 05/29/2018 10:36 AM, Ken
On 05/19/2018 05:58 PM, Blair Bethwaite wrote:
G'day Jay,
On 20 May 2018 at 08:37, Jay Pipes wrote:
If it's not the VM or baremetal machine that is using the accelerator, what
is?
It will be a VM or BM, but I don't think accelerators should be tied
to the life of a
Are you talking about downtime of instances (and the dataplane), or of the
OpenStack API and control plane?
And when you say "zero downtime" are you really talking about "five nines" or
similar? Because nothing is truly zero downtime.
If you care about HA then you'll need additional
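For concreteness, "five nines" permits only minutes of downtime per year; the arithmetic:

```python
# Allowed downtime per year for a given number of "nines" of availability.

def downtime_minutes_per_year(nines):
    availability = 1 - 10 ** -nines
    return (1 - availability) * 365 * 24 * 60  # minutes in a (non-leap) year

print(round(downtime_minutes_per_year(3), 1))  # 525.6 -- roughly 8.8 hours/year
print(round(downtime_minutes_per_year(5), 2))  # 5.26 minutes/year
```

Which is why the question above matters: "zero downtime" of the control plane and five nines of the dataplane are very different engineering targets.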
On 05/13/2018 09:23 PM, 何健乐 wrote:
Hi, all
When I did live-migration, I met the following error:
result = proxy_call(self._autowrap, f, *args, **kwargs)
May 14 10:33:11 nova-compute[981335]: File
"/usr/lib64/python2.7/site-packages/libvirt.py",
On 05/11/2018 10:30 AM, Remo Mattei wrote:
Hello guys, I have a need now to get a Windows VM into the OpenStack
deployment. Can anyone suggest the best way to do this. I have done mostly
Linux. I could use the ISO and build one within OpenStack not sure I want to go
that route. I have some
On 05/04/2018 07:50 AM, Matt Riedemann wrote:
For full details on this, see the IRC conversation [1].
tl;dr: the nova compute manager and xen virt driver assume that you can reboot a
rescued instance [2] but the API does not allow that [3] and as far as I can
tell, it never has.
I can only
On 04/22/2018 12:57 AM, Fabian Zimmermann wrote:
Hi,
just create an empty image (Without file or location param), then use
add-location to set your locations.
I was under the impression that the V2 API didn't let you update the location
unless the "show_multiple_locations" config option was
On 04/20/2018 01:48 AM, Adrian Turjak wrote:
What version of the SDK are you using?
Originally I just used what was installed in my devstack VM, which seems to be
0.9.17. Upgrading to 0.12.0 allowed it to work.
Thanks,
Chris
On 04/19/2018 09:19 PM, Adrian Turjak wrote:
On 20/04/18 01:46, Chris Friesen wrote:
On 04/19/2018 07:01 AM, Jeremy Stanley wrote:
Or, for that matter, leverage OpenStackSDK's ability to pass
arbitrary calls to individual service APIs when you need something
not exposed by the porcelain
On 04/19/2018 08:33 AM, Jay Pipes wrote:
On 04/19/2018 09:15 AM, Matthew Booth wrote:
We've had inconsistent naming of recreate/evacuate in Nova for a long
time, and it will persist in a couple of places for a while more.
However, I've proposed the following to rename 'recreate' to
'evacuate'
On 04/19/2018 07:01 AM, Jeremy Stanley wrote:
On 2018-04-19 12:24:48 +1000 (+1000), Joshua Hesketh wrote:
There is also nothing stopping you from using both. For example,
you could use the OpenStack SDK for most things but if you hit an
edge case where you need something specific you can then
On 04/18/2018 10:57 AM, Jay Pipes wrote:
On 04/18/2018 12:41 PM, Matt Riedemann wrote:
There is a compute REST API change proposed [1] which will allow users to pass
trusted certificate IDs to be used with validation of images when creating or
rebuilding a server. The trusted cert IDs are based
On 04/18/2018 09:17 AM, Artom Lifshitz wrote:
To that end, we'd like to know what filters operators are enabling in
their deployment. If you can, please reply to this email with your
[filter_scheduler]/enabled_filters (or
[DEFAULT]/scheduler_default_filters if you're using an older version)
On 04/18/2018 09:58 AM, Matt Riedemann wrote:
On 4/18/2018 9:06 AM, Jay Pipes wrote:
"By default, should resources/traits submitted in different numbered request
groups be supplied by separate resource providers?"
Without knowing all of the hairy use cases, I'm trying to channel my inner
On 04/18/2018 08:06 AM, Jay Pipes wrote:
Stackers,
Eric Fried and I are currently at an impasse regarding a decision that will have
far-reaching (and end-user facing) impacts to the placement API and how nova
interacts with the placement service from the nova scheduler.
We need to make a
On 04/17/2018 07:13 AM, Jeremy Stanley wrote:
The various "client libraries" (e.g. python-novaclient,
python-cinderclient, et cetera) can also be used to that end, but
are mostly for service-to-service communication these days, aren't
extremely consistent with each other, and tend to eventually
On 04/03/2018 04:25 AM, Xiong, Huan wrote:
Hi,
I'm using a cloud benchmarking tool [1], which creates a *single* nova
client object in main thread and invoke methods on that object in different
worker threads. I find it generated malformed requests at random (my
system has python-novaclient
On 03/27/2018 10:42 AM, Matt Riedemann wrote:
On 3/27/2018 10:37 AM, Jay Pipes wrote:
If we want to actually fix the issue once and for all, we need to make
availability zones a real thing that has a permanent identifier (UUID) and
store that permanent identifier in the instance (not the
On 03/21/2018 08:17 PM, Tyler Bishop wrote:
We've been fighting a constant clock skew issue lately on 4 of our clusters.
They all use NTP but seem to go into WARN every 12 hours or so.
Anyone else experiencing this?
What clock are you using in the guest?
Chris
On 03/07/2018 06:10 PM, Lance Bragstad wrote:
The keystone team is parsing the unified limits discussions from last
week. One of the things we went over as a group was the usability of the
current API [0].
Currently, the create and update APIs support batch processing. So
specifying a list of
On 03/07/2018 10:44 AM, Tim Bell wrote:
I think nested quotas would give the same thing, i.e. you have a parent project
for the group and child projects for the users. This would not need user/group
quotas but continue with the ‘project owns resources’ approach.
Agreed, I think that if we
On 03/07/2018 09:49 AM, Lance Bragstad wrote:
On 03/07/2018 09:31 AM, Chris Friesen wrote:
On 03/07/2018 08:58 AM, Lance Bragstad wrote:
Hi all,
Per the identity-integration track at the PTG [0], I proposed a new oslo
library for services to use for hierarchical quota enforcement [1]. Let
On 03/07/2018 10:33 AM, Tim Bell wrote:
Sorry, I remember more detail now... it was using the 'owner' of the VM as part
of the policy rather than quota.
Is there a per-user/per-group quota in Nova?
Nova supports setting quotas for individual users within a project (as long as
they are
On 03/07/2018 08:58 AM, Lance Bragstad wrote:
Hi all,
Per the identity-integration track at the PTG [0], I proposed a new oslo
library for services to use for hierarchical quota enforcement [1]. Let
me know if you have any questions or concerns about the library. If the
oslo team would like, I
On 02/21/2018 03:19 PM, Kwan, Louie wrote:
When turning off a VM by doing nova stop, the Status and Task State are there
in Nova. But can libvirt/qemu programmatically figure out the ‘Task State’,
i.e. that the VM is trying to power off?
For libvirt, it seems to only know the “Power State”?
On 02/13/2018 09:32 AM, Vincent Godin wrote:
When creating an image, in the metadata "libvirt Driver Options" it's only
possible to set "hw_scsi_model" to "virtio-scsi"; there is no way
to set the number of queues. As this is a big factor in io
improvement, why is this option still not available
On 02/05/2018 06:33 PM, Jay Pipes wrote:
It does seem to me, however, that if the intention is *not* to get into the
multi-cloud orchestration game, that a simpler solution to this multi-region
OpenStack deployment use case would be to simply have a global Glance and
Keystone infrastructure
On 01/30/2018 09:15 AM, Matt Riedemann wrote:
The 10.0.0 release of python-novaclient dropped some deprecated CLIs and python
API bindings for the server actions to add/remove fixed and floating IPs:
https://docs.openstack.org/releasenotes/python-novaclient/queens.html#id2
On 01/29/2018 07:47 AM, Jay Pipes wrote:
What I believe we can do is change the behaviour so that if a 0.0 value is found
in the nova.conf file on the nova-compute worker, then instead of defaulting to
16.0, the resource tracker would first look to see if the compute node was
associated with a
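Jay's proposed fallback can be sketched as: treat 0.0 in nova.conf as "unset", fall back to an aggregate-supplied ratio, and only then to the hard-coded 16.0 default. This is an illustration of the lookup order being discussed, not the actual resource-tracker patch:

```python
# Sketch of the proposed lookup: 0.0 in conf means "unset", so fall back
# to an aggregate-supplied ratio, then to the 16.0 default. Illustrative only.

DEFAULT_CPU_ALLOCATION_RATIO = 16.0

def effective_cpu_ratio(conf_ratio, aggregate_ratio=None):
    if conf_ratio and conf_ratio != 0.0:
        return conf_ratio
    if aggregate_ratio:
        return aggregate_ratio
    return DEFAULT_CPU_ALLOCATION_RATIO

print(effective_cpu_ratio(4.0))       # 4.0  -- explicit conf value wins
print(effective_cpu_ratio(0.0, 2.0))  # 2.0  -- aggregate fallback
print(effective_cpu_ratio(0.0))       # 16.0 -- hard-coded default
```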
On 01/18/2018 02:54 PM, Mathieu Gagné wrote:
We use this feature to segregate capacity/hosts based on CPU
allocation ratio using aggregates.
This is because we have different offers/flavors based on those
allocation ratios. This is part of our business model.
A flavor extra_specs is use to
On 11/24/2017 10:23 AM, Julia Kreger wrote:
Greetings Michael,
I believe it would need to involve multiple machines at the same time.
I guess there are two different approaches that I think _could_ be
taken to facilitate this:
1) Provide a facility to use a specific volume as the "golden
On 11/14/2017 02:10 PM, Doug Hellmann wrote:
Excerpts from Chris Friesen's message of 2017-11-14 14:01:58 -0600:
On 11/14/2017 01:28 PM, Dmitry Tantsur wrote:
The quality of backported fixes is expected to be a direct (and only?)
interest of those new teams of new cores, coming from users and
On 11/14/2017 01:28 PM, Dmitry Tantsur wrote:
The quality of backported fixes is expected to be a direct (and only?)
interest of those new teams of new cores, coming from users and operators and
vendors.
I'm not assuming bad intentions, not at all. But there is a lot involved in a
decision
On 11/14/2017 10:25 AM, Doug Hellmann wrote:
Why
would we have third-party jobs on an old branch that we don't have on
master, for instance?
One possible reason is to test the stable version of OpenStack against a stable
version of the underlying OS distro. (Where that distro may not meet
On 10/31/2017 01:13 AM, haad wrote:
Hi,
We have an OSA installation with 10-12 compute nodes running Mitaka on Ubuntu
16.04. As initially we have not prepared any long term update strategy we would
like to create one now. Plan would be to upgrade it to new OSA
release(Ocata/Pike/Queens) in near
On 11/02/2017 01:03 AM, Chris wrote:
Hello,
When we shut down a compute node the instances running on it get suspended. This
causes difficulties with applications like RabbitMQ that don't like to be
suspended. Is there a way to change this behavior so that the running instances
get
On 11/02/2017 08:48 AM, Mike Lowe wrote:
After moving from CentOS 7.3 to 7.4, I’ve had trouble getting live migration to
work when a volume is attached. As it turns out when a live migration takes
place the libvirt driver rewrites portions of the xml definition for the
destination hypervisor
On 10/18/2017 11:37 AM, Chris Apsey wrote:
All,
I'm working to add baremetal provisioning to an already-existing libvirt (kvm)
deployment. I was under the impression that our currently-existing endpoints
that already run nova-conductor/nova-scheduler/etc. can be modified to support
both kvm
On 10/16/2017 09:22 AM, Matt Riedemann wrote:
2. Should we null out the instance.availability_zone when it's shelved offloaded
like we do for the instance.host and instance.node attributes? Similarly, we
would not take into account the RequestSpec.availability_zone when scheduling
during
On 10/06/2017 11:32 AM, Mathieu Gagné wrote:
Why not supporting this use case?
I don't think anyone is suggesting we support it, but nobody has stepped up
to actually merge a change that implements it.
I think what Matt is suggesting is that we make it fail fast *now*, and if
someone
On 10/05/2017 03:47 AM, Kekane, Abhishek wrote:
So the question here is, what is the exact goal of
AggregateImagePropertiesIsolation' scheduler filter: - Is it one of the
following:-
1. Matching all metadata of host aggregate with image properties.
2. Matching image properties with host
On 10/03/2017 11:12 AM, Clint Byrum wrote:
My personal opinion is that rebuild is an anti-pattern for cloud, and
should be frozen and deprecated. It does nothing but complicate Nova
and present challenges for scaling.
That said, if it must stay as a feature, I don't think updating the
On 09/28/2017 05:29 AM, Sahid Orentino Ferdjaoui wrote:
Only the memory mapped for the guest is strictly allocated from the
NUMA node selected. The QEMU overhead should float across the host NUMA
nodes. So it seems that "reserved_host_memory_mb" is enough.
What I see in the code/docs doesn't
On 09/27/2017 04:55 PM, Blair Bethwaite wrote:
Hi Prema
On 28 September 2017 at 07:10, Premysl Kouril wrote:
Hi, I work with Jakub (the op of this thread) and here is my two
cents: I think what is critical to realize is that KVM virtual
machines can have substantial
On 09/27/2017 03:10 PM, Premysl Kouril wrote:
Lastly, qemu has overhead that varies depending on what you're doing in the
guest. In particular, there are various IO queues that can consume
significant amounts of memory. The company that I work for put in a good
bit of effort engineering things
On 09/27/2017 08:01 AM, Blair Bethwaite wrote:
On 27 September 2017 at 23:19, Jakub Jursa wrote:
'hw:cpu_policy=dedicated' (while NOT setting 'hw:numa_nodes') results in
libvirt pinning CPU in 'strict' memory mode
(from libvirt xml for given instance)
...
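The libvirt XML element in question is `<numatune>`, where `mode='strict'` means guest memory allocations must come from the listed host nodes. A small sketch that builds and inspects the element with the standard library (the `nodeset` value is hypothetical):

```python
# Sketch: the <numatune> element libvirt uses for guest memory placement.
# mode="strict" forces allocations onto the listed host NUMA nodes;
# nodeset="0" here is a hypothetical single-node pinning.
import xml.etree.ElementTree as ET

numatune = ET.Element("numatune")
ET.SubElement(numatune, "memory", mode="strict", nodeset="0")

xml_text = ET.tostring(numatune, encoding="unicode")
print(xml_text)  # <numatune><memory mode="strict" nodeset="0" /></numatune>
```

Under strict mode, if node 0 cannot satisfy an allocation the guest can be OOM-killed even when other host nodes have free memory, which is the failure mode this thread is circling around.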
On 09/27/2017 03:12 AM, Jakub Jursa wrote:
On 27.09.2017 10:40, Blair Bethwaite wrote:
On 27 September 2017 at 18:14, Stephen Finucane wrote:
What you're probably looking for is the 'reserved_host_memory_mb' option. This
defaults to 512 (at least in the latest master)
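The effect of that option is simple subtraction from what the compute node reports as available RAM (512 MB being the default mentioned above; the 16 GiB host is hypothetical):

```python
# Sketch: reserved_host_memory_mb is held back from host RAM before
# nova reports capacity. 512 is the documented default.

def reportable_ram_mb(host_ram_mb, reserved_host_memory_mb=512):
    return host_ram_mb - reserved_host_memory_mb

print(reportable_ram_mb(16384))        # 15872 with the 512 MB default
print(reportable_ram_mb(16384, 4096))  # 12288 when reserving 4 GiB for the host
```

The earlier replies in this thread argue that for hosts running pinned, hugepage-backed guests, 512 MB is often too small a reservation for QEMU overhead.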
On 09/20/2017 12:47 PM, Matt Riedemann wrote:
I wanted to bring it up here in case anyone had a good reason why we should not
continue to exclude originally failed hosts during live migration, even if the
admin is specifying one of those hosts for the live migration destination.
Presumably
On 09/20/2017 08:59 AM, Steven D. Searles wrote:
Done, thanks for the assistance Chris and everyone.
https://bugs.launchpad.net/nova/+bug/1718455
I pinged the nova devs and mriedem suggested a fix you might want to try. In
nova/scheduler/filter_scheduler.py, function select_destinations(),
On 09/19/2017 05:21 PM, Steven D. Searles wrote:
Hello everyone and thanks in advance. I have Openstack Pike (KVM,FC-SAN/Cinder)
installed in our lab for testing before upgrade and am seeing a possible issue
with disabling a host and live migrating the instances off via the horizon
interface. I
On 09/07/2017 02:27 PM, Sahid Orentino Ferdjaoui wrote:
On Wed, Sep 06, 2017 at 11:57:25PM -0400, Jay Pipes wrote:
Sahid, Stephen, what are your thoughts on this?
On 09/06/2017 10:17 PM, Yaguang Tang wrote:
I think the fact that RamFilter can't deal with huge pages is a bug,
due to this
1 - 100 of 658 matches