open+project:openstack/nova+branch:master+topic:bp/numa-aware-live-migration
> [3] https://review.openstack.org/#/c/576602/
> [4] https://review.openstack.org/#/c/574871/6
--
Artom Lifshitz
Software Engineer, OpenStack Compute DFG
Yes, this is still happening. Mea culpa for not carrying the ball and
maintaining visibility. There's work in nova to actually get it
working, and in intel-nfv-ci to lay down the groundwork for eventual
CI.
In nova, the spec has been re-proposed for Stein [1]. There are some
differences from the
On Tue, Jul 24, 2018 at 12:30 PM, Clark Boylan wrote:
> On Tue, Jul 24, 2018, at 9:23 AM, Artom Lifshitz wrote:
>> Hey all,
>>
>> tl;dr Humbly requesting a handful of nodes to run NFV tests in CI
>>
>> Intel has their NFV tests tempest plugin [1] and manages
Hey all,
tl;dr Humbly requesting a handful of nodes to run NFV tests in CI
Intel has their NFV tests tempest plugin [1] and manages a third party
CI for Nova. Two of the cores on that project (Stephen Finucane and
Sean Mooney) have now moved to Red Hat, but the point still stands
that there's a
I've proposed [1] to add extra logging on the Nova side. Let's see if
that helps us catch the root cause of this.
[1] https://review.openstack.org/584032
On Thu, Jul 19, 2018 at 12:50 PM, Artom Lifshitz wrote:
> Because we're waiting for the volume to become available before we
> co
-19_10_06_09_273919
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> Side question... does either approach touch PCI device management during
> live migration?
Nope. I'd need to do some research to see what, if anything, is needed
at the lower levels (kernel, libvirt) to enable this.
> I ask because the only workloads I've ever seen that pin guest vCPU threads
>
> As I understand it, Artom is proposing to have a larger race window,
> essentially
> from when the scheduler selects a node until the resource audit runs on
> that node.
>
Exactly. When writing the spec I thought we could just call the resource
tracker to claim the resources when the
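To make the claim-then-rollback idea concrete: a toy sketch of claiming destination resources at selection time and rolling back on failure. `HostResources` and its method names are hypothetical stand-ins for illustration, not nova's actual resource tracker API.

```python
class ClaimError(Exception):
    """Raised when the destination cannot fit the requested resources."""

class HostResources:
    """Toy stand-in for a destination node's tracked pinned CPUs."""

    def __init__(self, free_cpus):
        self.free_cpus = set(free_cpus)

    def claim(self, cpus):
        # Claiming atomically at selection time (instead of waiting for
        # the periodic resource audit) shrinks the race window described
        # above.
        cpus = set(cpus)
        if not cpus <= self.free_cpus:
            raise ClaimError("destination cannot fit the pinned vCPUs")
        self.free_cpus -= cpus

    def abort(self, cpus):
        # Roll the claim back if the migration fails after claiming.
        self.free_cpus |= set(cpus)

dest = HostResources(free_cpus={0, 1, 2, 3})
dest.claim({0, 1})   # migration proceeds with CPUs 0-1 reserved
dest.abort({0, 1})   # migration failed; give them back
```

The point of the sketch is only the shape: claim early, abort on failure, and let the audit reconcile anything that slips through.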
> Adding
> claims support later on wouldn't change any on-the-wire messaging, it would
> just make things work more robustly.
I'm not even sure about that. Assuming [1] has at least the right
idea, it looks like it's an either-or kind of thing: either we use
resource tracker claims and get the
> For what it's worth, I think the previous patch languished for a number of
> reasons other than the complexity of the code...the original author left,
> the coding style was a bit odd, there was an attempt to make it work even if
> the source was an earlier version, etc. I think a fresh
Hey all,
For Rocky I'm trying to get live migration to work properly for
instances that have a NUMA topology [1].
A question that came up on one of patches [2] is how to handle
resources claims on the destination, or indeed whether to handle that
at all.
The previous attempt's approach [3]
attach volume to server 2
>>
>> Combined with a patch to nova to disallow swap_volume on any
>> multiattach volume, this would then be possible if inconvenient.
>>
>> Regardless of any other changes, though, I think it's urgent that we
>> disable the ability to swap_volume a multiattach volume because we
>> don't
> On Tue, May 29, 2018 at 10:52:04AM -0400, Mohammed Naser wrote:
>
> :On Tue, May 29, 2018 at 10:43 AM, Artom Lifshitz wrote:
> :> One idea would be that, once the meat of the patch
> :> has passed multiple rounds of reviews and looks good, and what remains
> :>
> -Julia
> -
> [1]: https://review.openstack.org/570940
>
ecsFilter
> ImagePropertiesFilter
> ServerGroupAntiAffinityFilter
> SameHostFilter
>
> Cheers,
> Lingxian Kong
>
>
> On Sat, Apr 28, 2018 at 3:04 AM Jim Rollenhagen <j...@jimrollenhagen.com>
> wrote:
>>
>> On Wed, Apr 18, 2018 at 11:17 AM, Artom Lifshitz <alifs...@redhat.c
Hi all,
A CI issue [1] caused by tempest thinking some filters are enabled
when they're really not, and a proposed patch [2] to add
(Same|Different)HostFilter to the default filters as a workaround, has
led to a discussion about what filters should be enabled by default in
nova.
The default
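As a concrete illustration of the workaround, enabling the extra filters amounts to an operator (or the CI job) extending the scheduler's filter list. The option name below is the filter_scheduler-era one, and the filter list shown is illustrative, not the actual release defaults:

```ini
[filter_scheduler]
# Append (Same|Different)HostFilter to the release's default list so the
# corresponding scheduler hints actually take effect.
enabled_filters = ComputeFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,SameHostFilter,DifferentHostFilter
```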
> That is easier said than done. There have been a couple of related attempts
> in the past:
>
> https://review.openstack.org/#/c/266425/
>
> https://review.openstack.org/#/c/282872/
>
> I don't remember exactly where those fell down, but it's worth looking at
> this first before trying to do this
> - virtio-vsock - think of this as UNIX domain sockets between the host and
>guest. This is to deal with the valid use case of people wanting to use
>a network protocol, but not wanting a real NIC exposed to the guest/host
>for security concerns. As such I think it'd be useful to
> But before doing that though, I think it'd be worth understanding whether
> metadata-over-vsock support would be acceptable to people who refuse
> to deploy metadata-over-TCPIP today.
I wrote a thing [1], let's see what happens.
[1]
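To make the vsock idea concrete: a minimal sketch of a metadata listener over AF_VSOCK (Linux 4.8+ kernels, Python 3.7+). The port number and fallback behavior are assumptions for illustration, not what the spec proposes.

```python
import socket

def make_metadata_listener(port=8775):
    """Listen for guest metadata requests over virtio-vsock, not a NIC.

    vsock addresses are (CID, port) tuples; VMADDR_CID_ANY accepts
    connections from any guest. Returns None where vsock is unavailable,
    in which case a deployment keeps using (or refusing) TCP/IP.
    """
    if not hasattr(socket, "AF_VSOCK"):
        return None  # kernel or Python too old
    try:
        sock = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
        sock.bind((socket.VMADDR_CID_ANY, port))
        sock.listen(1)
        return sock
    except OSError:
        return None  # no /dev/vsock on this host
```

Because there is no IP addressing involved at all, this sidesteps both the "real NIC exposed to the guest" objection and the IPv6 question in one go.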
>> But before doing that though, I think it'd be worth understanding whether
>> metadata-over-vsock support would be acceptable to people who refuse
>> to deploy metadata-over-TCPIP today.
>
> Sure, although I'm still concerned that it'll effectively make tagged
> hotplug libvirt-only.
Upon
> But before doing that though, I think it'd be worth understanding whether
> metadata-over-vsock support would be acceptable to people who refuse
> to deploy metadata-over-TCPIP today.
Sure, although I'm still concerned that it'll effectively make tagged
hotplug libvirt-only.
to be more deployable, both in terms of security and IPv6
support?
[1] https://review.openstack.org/#/c/333781/
On Mon, Feb 20, 2017 at 10:35 AM, Clint Byrum <cl...@fewbar.com> wrote:
> Excerpts from Jay Pipes's message of 2017-02-20 10:00:06 -0500:
>> On 02/17/2017 02:28 PM, Artom
19, 2017 at 6:12 AM, Steve Gordon <sgor...@redhat.com> wrote:
> - Original Message -
> > From: "Artom Lifshitz" <alifs...@redhat.com>
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.opensta
m
>
In reply to Michael:
> We have had this discussion several times in the past for other reasons. The
> reality is that some people will never deploy the metadata API, so I feel
> like we need a better solution than what we have now.
Aha, that's definitely a good reason to continue making the
thoughts into writing, and
will be able to refer to them later on :)
[1]
https://review.openstack.org/#/q/status:open+topic:bp/virt-device-tagged-attach-detach
[2]
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2667-L2677
8:32 PM, Artom Lifshitz wrote:
>>
>> Since the consensus is to fix this with a new microversion, I've
>> submitted some patches:
>>
>> * https://review.openstack.org/#/c/426030/
>> A spec for the new microversion in case folks want one.
>
>
> Mer
--
Artom Lifshitz
> So the current API behavior is as below:
>
> 2.32: BDM tag and network device tag added.
> 2.33 - 2.36: 'tag' in the BDM disappeared. The network device tag still
> works.
> 2.37: The network device tag disappeared also.
Thanks for the summary. For the visual minded like me, I made some
ASCII
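From the summary quoted above, the per-microversion behavior can be captured mechanically. This is a reconstruction of the 2.32-2.37 window only; later microversions are out of scope here:

```python
def tag_support(microversion):
    """Which device tags server create accepts, per the summary above:
    2.32 introduces both tags, 2.33-2.36 lose the BDM tag, and 2.37
    loses the network device tag as well."""
    major, minor = (int(part) for part in microversion.split("."))
    if major != 2 or not 32 <= minor <= 37:
        raise ValueError("only the 2.32-2.37 window is summarized here")
    return {
        "bdm_tag": minor == 32,
        "nic_tag": 32 <= minor <= 36,
    }
```

So 2.32 is the only microversion in that window where both tags work, which is exactly why a new microversion is needed to fix the regression.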
n issue
> alone is enough of a reason to do this.
>
> Regards,
> Daniel
> --
> |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org -o-
> The Hyper-V implementation of the bp virt-device-role-tagging is mergeable
> [1]. The patch is quite simple, it got some reviews, and the tempest test
> test_device_tagging [2] passed. [3]
>
> [1] https://review.openstack.org/#/c/331889/
> [2] https://review.openstack.org/#/c/305120/
> [3]
/327920/
--
Artom Lifshitz
Hello,
I'd like to get the conversation started around a spec that my colleague Dan
Berrange has proposed to the backlog.
The spec [1] solves the problem of passing information about virtual devices
into an instance.
For example, in an instance with multiple network interfaces, each connected
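To illustrate the kind of information the spec wants to convey to the guest, here is a sketch of tagged device metadata. The field names and values are illustrative only; the real format is whatever the spec defines:

```python
# Hypothetical device metadata a guest could read to tell its devices
# apart by role; only the shape of the idea, not the spec's actual format.
device_metadata = {
    "devices": [
        {"type": "nic", "mac": "fa:16:3e:00:00:01", "tags": ["management"]},
        {"type": "nic", "mac": "fa:16:3e:00:00:02", "tags": ["data"]},
        {"type": "disk", "bus": "virtio", "serial": "vol-0001",
         "tags": ["database"]},
    ]
}

def device_for_tag(metadata, tag):
    """Let the guest find which device(s) carry a given role tag."""
    return [d for d in metadata["devices"] if tag in d["tags"]]
```

With something like this, guest-side tooling can map "the data network" to a concrete MAC address without guessing from device ordering.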
Hello,
I'd like to gauge acceptance of introducing a feature that would give operators
a config option to perform real database deletes instead of soft deletes.
There's definitely a need for *something* that cleans up the database. There
have been a few attempts at a DB purge engine
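For contrast, a minimal sketch of the two behaviors, using the deleted-column convention. The table and column names are illustrative, not nova's real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE instances (id INTEGER PRIMARY KEY, deleted INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO instances (id) VALUES (?)", [(1,), (2,), (3,)])

def soft_delete(conn, instance_id):
    # The row stays behind: every query must filter on deleted = 0,
    # and the table grows until something purges it.
    conn.execute("UPDATE instances SET deleted = id WHERE id = ?",
                 (instance_id,))

def real_delete(conn, instance_id):
    # The row is gone immediately; no purge engine needed afterwards.
    conn.execute("DELETE FROM instances WHERE id = ?", (instance_id,))

soft_delete(conn, 2)
real_delete(conn, 3)
live = [r[0] for r in conn.execute("SELECT id FROM instances WHERE deleted = 0")]
remaining = [r[0] for r in conn.execute("SELECT id FROM instances")]
```

The soft-deleted row is invisible to normal queries but still occupies space, which is precisely the cleanup problem a real-delete config option would avoid.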