Re: [openstack-dev] [Nova][Neutron][NFV][Third-party] CI for NUMA, SR-IOV, and other features that can't be tested on current infra.

2014-12-09 Thread Joe Gordon
On Tue, Nov 25, 2014 at 2:01 PM, Steve Gordon sgor...@redhat.com wrote:

 - Original Message -
  From: Daniel P. Berrange berra...@redhat.com
  To: Dan Smith d...@danplanet.com
 
  On Thu, Nov 13, 2014 at 05:43:14PM +, Daniel P. Berrange wrote:
   On Thu, Nov 13, 2014 at 09:36:18AM -0800, Dan Smith wrote:
 That sounds like something worth exploring at least, I didn't know
 about that kernel build option until now :-) It sounds like it
 ought
 to be enough to let us test the NUMA topology handling, CPU pinning
 and probably huge pages too.
   
Okay. I've been vaguely referring to this as a potential test vector,
but only just now looked up the details. That's my bad :)
   
 The main gap I'd see is NUMA aware PCI device assignment since the
 PCI-to-NUMA-node mapping data comes from the BIOS and it does not
 look like this is fakeable as is.
   
Yeah, although I'd expect that the data is parsed and returned by a
library or utility that may be a hook for fakeification. However, it
 may
very well be more trouble than it's worth.
   
I still feel like we should be able to test generic PCI in a similar
 way
(passing something like a USB controller through to the guest, etc).
However, I'm willing to believe that the intersection of PCI and
 NUMA is
a higher order complication :)
  
   Oh I forgot to mention with PCI device assignment (as well as having a
   bunch of PCI devices available[1]), the key requirement is an IOMMU.
   AFAIK, neither Xen nor KVM provides any IOMMU emulation, so I think we're
   out of luck for even basic PCI assignment testing inside VMs.
 
  Ok, turns out that wasn't entirely accurate in general.
 
  KVM *can* emulate an IOMMU, but it requires that the guest be booted
  with the q35 machine type, instead of the ancient PIIX4 machine type,
  and also QEMU must be launched with -machine iommu=on. We can't do
  this in Nova, so although it is theoretically possible, it is not
  doable for us in reality :-(
 
  Regards,
  Daniel

 Is it worth still pursuing virtual testing of the NUMA awareness work you,
 nikola, and others have been doing? It seems to me it would still be
 preferable to do this virtually (and ideally in the gate) wherever possible?


The more we can test in the gate and without real hardware the better.



 Thanks,

 Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




2014-11-25 Thread Steve Gordon
- Original Message -
 From: Daniel P. Berrange berra...@redhat.com
 To: Dan Smith d...@danplanet.com
 
 On Thu, Nov 13, 2014 at 05:43:14PM +, Daniel P. Berrange wrote:
  On Thu, Nov 13, 2014 at 09:36:18AM -0800, Dan Smith wrote:
That sounds like something worth exploring at least, I didn't know
about that kernel build option until now :-) It sounds like it ought
to be enough to let us test the NUMA topology handling, CPU pinning
and probably huge pages too.
   
   Okay. I've been vaguely referring to this as a potential test vector,
   but only just now looked up the details. That's my bad :)
   
The main gap I'd see is NUMA aware PCI device assignment since the
PCI-to-NUMA-node mapping data comes from the BIOS and it does not
look like this is fakeable as is.
   
   Yeah, although I'd expect that the data is parsed and returned by a
   library or utility that may be a hook for fakeification. However, it may
   very well be more trouble than it's worth.
   
   I still feel like we should be able to test generic PCI in a similar way
   (passing something like a USB controller through to the guest, etc).
   However, I'm willing to believe that the intersection of PCI and NUMA is
   a higher order complication :)
  
  Oh I forgot to mention with PCI device assignment (as well as having a
  bunch of PCI devices available[1]), the key requirement is an IOMMU.
  AFAIK, neither Xen nor KVM provides any IOMMU emulation, so I think we're
  out of luck for even basic PCI assignment testing inside VMs.
 
 Ok, turns out that wasn't entirely accurate in general.
 
 KVM *can* emulate an IOMMU, but it requires that the guest be booted
 with the q35 machine type, instead of the ancient PIIX4 machine type,
 and also QEMU must be launched with -machine iommu=on. We can't do
 this in Nova, so although it is theoretically possible, it is not
 doable for us in reality :-(
 
 Regards,
 Daniel

Is it worth still pursuing virtual testing of the NUMA awareness work you, 
nikola, and others have been doing? It seems to me it would still be preferable 
to do this virtually (and ideally in the gate) wherever possible?

Thanks,

Steve




2014-11-18 Thread Joe Gordon
On Mon, Nov 17, 2014 at 1:29 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 On 12 November 2014 11:11, Steve Gordon sgor...@redhat.com wrote:

 NUMA
 

 We still need to identify some hardware to run third party CI for the
 NUMA-related work, and no doubt other things that will come up. It's
 expected that this will be an interim solution until OPNFV resources can be
 used (note cdub jokingly replied "1-2 years" when asked for a rough
 estimate - I mention this because based on a later discussion some people
 took this as a serious estimate).

 Ian did you have any luck kicking this off? Russell and I are also
 endeavouring to see what we can do on our side w.r.t. this short term
 approach - in particular if you find hardware we still need to find an
 owner to actually setup and manage it as discussed.



 In theory to get started we need a physical multi-socket box and a virtual
 machine somewhere on the same network to handle job control etc. I believe
 the tests themselves can be run in VMs (just not those exposed by existing
 public clouds) assuming a recent Libvirt and an appropriately crafted
 Libvirt XML that ensures the VM gets a multi-socket topology etc. (we can
 assist with this).


 With apologies for the late reply, but I was off last week.  And because I
 was off last week I've not done anything about this so far.

 I'm assuming that we'll just set up one physical multisocket box and
 ensure that we can do a cleanup-deploy cycle so that we can run whatever
 x86-dependent but otherwise relatively hardware agnostic tests we might
 need.  Seems easier than worrying about what libvirt and KVM do and don't
 support at a given moment in time.

 I'll go nag our lab people for the machines.  I'm thinking for the
 cleanup-deploy that I might just try booting the physical machine into a
 RAM root disk and then running a devstack setup, as it's probably faster
 than a clean install, but I'm open to options.  (There's quite a lot of
 memory in the servers we have so this is likely to work fine.)

 That aside, where are the tests going to live?


Great question. I think these tests are good candidates for functional
(devstack-based) tests that live in the nova tree.



 --
 Ian.






2014-11-17 Thread Ian Wells
On 12 November 2014 11:11, Steve Gordon sgor...@redhat.com wrote:

NUMA
 

 We still need to identify some hardware to run third party CI for the
 NUMA-related work, and no doubt other things that will come up. It's
 expected that this will be an interim solution until OPNFV resources can be
 used (note cdub jokingly replied "1-2 years" when asked for a rough
 estimate - I mention this because based on a later discussion some people
 took this as a serious estimate).

 Ian did you have any luck kicking this off? Russell and I are also
 endeavouring to see what we can do on our side w.r.t. this short term
 approach - in particular if you find hardware we still need to find an
 owner to actually setup and manage it as discussed.



In theory to get started we need a physical multi-socket box and a virtual
 machine somewhere on the same network to handle job control etc. I believe
 the tests themselves can be run in VMs (just not those exposed by existing
 public clouds) assuming a recent Libvirt and an appropriately crafted
 Libvirt XML that ensures the VM gets a multi-socket topology etc. (we can
 assist with this).


With apologies for the late reply, but I was off last week.  And because I
was off last week I've not done anything about this so far.

I'm assuming that we'll just set up one physical multisocket box and ensure
that we can do a cleanup-deploy cycle so that we can run whatever
x86-dependent but otherwise relatively hardware agnostic tests we might
need.  Seems easier than worrying about what libvirt and KVM do and don't
support at a given moment in time.

I'll go nag our lab people for the machines.  I'm thinking for the
cleanup-deploy that I might just try booting the physical machine into a
RAM root disk and then running a devstack setup, as it's probably faster
than a clean install, but I'm open to options.  (There's quite a lot of
memory in the servers we have so this is likely to work fine.)

That aside, where are the tests going to live?
-- 
Ian.



2014-11-13 Thread Daniel P. Berrange
On Wed, Nov 12, 2014 at 03:48:47PM -0500, Russell Bryant wrote:
 On 11/12/2014 02:11 PM, Steve Gordon wrote:
  NUMA
  
  
  We still need to identify some hardware to run third party CI for the
  NUMA-related work, and no doubt other things that will come up. It's
  expected that this will be an interim solution until OPNFV resources
  can be used (note cdub jokingly replied "1-2 years" when asked for a
  rough estimate - I mention this because based on a later discussion
  some people took this as a serious estimate).
  
  Ian did you have any luck kicking this off? Russell and I are also
  endeavouring to see what we can do on our side w.r.t. this short term
  approach - in particular if you find hardware we still need to find
  an owner to actually setup and manage it as discussed.
  
  In theory to get started we need a physical multi-socket box and a
  virtual machine somewhere on the same network to handle job control
  etc. I believe the tests themselves can be run in VMs (just not those
  exposed by existing public clouds) assuming a recent Libvirt and an
  appropriately crafted Libvirt XML that ensures the VM gets a
  multi-socket topology etc. (we can assist with this).
 
 I just wanted to clarify the hardware requirement.  A minimal setup for
 a first step can be just a single physical multi-socket machine.  We can
 run a VM on that machine for control and create ephemeral VMs with numa
 exposed to them for running the tests.

Yep, it is possible to run the tests inside VMs - the key is that when
you create the VMs you need to be able to give them NUMA topology. This
is possible if you're creating your VMs using virt-install, but not if
you're creating your VMs in a cloud.
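As a concrete sketch of the virt-install route: something along these
lines should produce a guest with a two-socket topology and two NUMA
cells. The --vcpus sub-options (sockets/cores/threads) map onto libvirt's
<topology> element and the --cpu cellN.* sub-options onto <numa> cells,
but the exact sub-option spellings vary across virt-install versions, and
the name, sizes, and install mode below are purely illustrative:

```shell
# Illustrative only: a two-socket, two-NUMA-cell test guest.
# Cell memory values are in KiB.
virt-install \
  --name numa-test \
  --memory 4096 \
  --vcpus 4,sockets=2,cores=2,threads=1 \
  --cpu host,cell0.cpus=0-1,cell0.memory=2097152,cell1.cpus=2-3,cell1.memory=2097152 \
  --disk size=20 \
  --import --graphics none
```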

 Note that in addition to setting up and maintaining the infrastructure,
 we also need someone to write test cases.

See also

  https://review.openstack.org/#/c/131818/
  
  
http://docs-draft.openstack.org/18/131818/4/check/gate-nova-docs/e8b8b8e/doc/build/html/devref/testing/libvirt-numa.html

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|




2014-11-13 Thread Dan Smith
 Yep, it is possible to run the tests inside VMs - the key is that when
 you create the VMs you need to be able to give them NUMA topology. This
 is possible if you're creating your VMs using virt-install, but not if
 you're creating your VMs in a cloud.

I think we should explore this a bit more. AFAIK, we can simulate a NUMA
system with CONFIG_NUMA_EMU=y and providing numa=fake=XXX to the guest
kernel. From a quick check with some RAX folks, we should have enough
control to arrange this. Since we can put a custom kernel (and
parameters) into our GRUB configuration that pygrub should honor, I
would think we could get a fake-NUMA guest running in at least one
public cloud. Since HP's cloud runs KVM, I would assume we have control
over our kernel and boot there as well.
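A rough sketch of what that boot configuration would look like inside the
cloud guest (distro-specific paths and commands, and the numa=fake syntax
differs between kernel versions, so treat this as illustrative):

```shell
# /etc/default/grub on the guest image (sketch): split the guest's RAM
# into two emulated NUMA nodes, appended to any existing parameters.
# Requires a kernel built with CONFIG_NUMA_EMU=y.
GRUB_CMDLINE_LINUX="numa=fake=2"

# Then regenerate the bootloader config and reboot, e.g.:
#   sudo update-grub                               # Debian/Ubuntu
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg    # Fedora/RHEL
# and verify the emulated topology afterwards with:
#   numactl --hardware
```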

Is there something I'm missing about why that's not doable?

--Dan






2014-11-13 Thread Daniel P. Berrange
On Thu, Nov 13, 2014 at 09:28:01AM -0800, Dan Smith wrote:
  Yep, it is possible to run the tests inside VMs - the key is that when
  you create the VMs you need to be able to give them NUMA topology. This
  is possible if you're creating your VMs using virt-install, but not if
  you're creating your VMs in a cloud.
 
 I think we should explore this a bit more. AFAIK, we can simulate a NUMA
 system with CONFIG_NUMA_EMU=y and providing numa=fake=XXX to the guest
 kernel. From a quick check with some RAX folks, we should have enough
 control to arrange this. Since we can put a custom kernel (and
 parameters) into our GRUB configuration that pygrub should honor, I
 would think we could get a fake-NUMA guest running in at least one
 public cloud. Since HP's cloud runs KVM, I would assume we have control
 over our kernel and boot there as well.
 
 Is there something I'm missing about why that's not doable?

That sounds like something worth exploring at least, I didn't know about
that kernel build option until now :-) It sounds like it ought to be enough
to let us test the NUMA topology handling, CPU pinning and probably huge
pages too. The main gap I'd see is NUMA aware PCI device assignment
since the PCI-to-NUMA-node mapping data comes from the BIOS and it does
not look like this is fakeable as is.
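On Linux the BIOS-provided mapping is surfaced by the kernel through
sysfs, which is where any fakeification hook would have to sit. A minimal
sketch of reading it, assuming the stock sysfs layout (returns None when
the platform exposes no mapping):

```python
from pathlib import Path

def pci_numa_node(bdf):
    """Return the NUMA node of a PCI device given its bus address,
    e.g. '0000:03:00.0', as reported by the kernel from BIOS/ACPI
    tables, or None if no mapping is exposed."""
    path = Path("/sys/bus/pci/devices") / bdf / "numa_node"
    try:
        node = int(path.read_text())
    except (OSError, ValueError):
        return None
    # The kernel reports -1 when firmware provided no locality info.
    return node if node >= 0 else None
```

Walking /sys/bus/pci/devices/*/numa_node over all devices gives the full
map that a faking layer would need to override.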

Regards,
Daniel




2014-11-13 Thread Dan Smith
 That sounds like something worth exploring at least, I didn't know
 about that kernel build option until now :-) It sounds like it ought
 to be enough to let us test the NUMA topology handling, CPU pinning
 and probably huge pages too.

Okay. I've been vaguely referring to this as a potential test vector,
but only just now looked up the details. That's my bad :)

 The main gap I'd see is NUMA aware PCI device assignment since the
 PCI-to-NUMA-node mapping data comes from the BIOS and it does not
 look like this is fakeable as is.

Yeah, although I'd expect that the data is parsed and returned by a
library or utility that may be a hook for fakeification. However, it may
very well be more trouble than it's worth.

I still feel like we should be able to test generic PCI in a similar way
(passing something like a USB controller through to the guest, etc).
However, I'm willing to believe that the intersection of PCI and NUMA is
a higher order complication :)

--Dan






2014-11-13 Thread Daniel P. Berrange
On Thu, Nov 13, 2014 at 09:36:18AM -0800, Dan Smith wrote:
  That sounds like something worth exploring at least, I didn't know
  about that kernel build option until now :-) It sounds like it ought
  to be enough to let us test the NUMA topology handling, CPU pinning
  and probably huge pages too.
 
 Okay. I've been vaguely referring to this as a potential test vector,
 but only just now looked up the details. That's my bad :)
 
  The main gap I'd see is NUMA aware PCI device assignment since the
  PCI-to-NUMA-node mapping data comes from the BIOS and it does not
  look like this is fakeable as is.
 
 Yeah, although I'd expect that the data is parsed and returned by a
 library or utility that may be a hook for fakeification. However, it may
 very well be more trouble than it's worth.
 
 I still feel like we should be able to test generic PCI in a similar way
 (passing something like a USB controller through to the guest, etc).
 However, I'm willing to believe that the intersection of PCI and NUMA is
 a higher order complication :)

Oh I forgot to mention with PCI device assignment (as well as having a
bunch of PCI devices available[1]), the key requirement is an IOMMU.
AFAIK, neither Xen nor KVM provides any IOMMU emulation, so I think we're
out of luck for even basic PCI assignment testing inside VMs.

Regards,
Daniel

[1] Devices which provide function level reset or PM reset capabilities,
as bus level reset is too painful to deal with, requiring co-assignment
of all devices on the same bus to the same guest.
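(On the footnote: whether a given device advertises function-level reset
can be read straight out of lspci's capability dump; the device address
below is just an example.)

```shell
# Look for "FLReset+" in the PCIe DevCap line; '+' means the device
# supports Function Level Reset, '-' means it does not.
# (03:00.0 is an illustrative address.)
sudo lspci -vvs 03:00.0 | grep -o 'FLReset[+-]'
```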




2014-11-13 Thread Daniel P. Berrange
On Thu, Nov 13, 2014 at 05:43:14PM +, Daniel P. Berrange wrote:
 On Thu, Nov 13, 2014 at 09:36:18AM -0800, Dan Smith wrote:
   That sounds like something worth exploring at least, I didn't know
   about that kernel build option until now :-) It sounds like it ought
   to be enough to let us test the NUMA topology handling, CPU pinning
   and probably huge pages too.
  
  Okay. I've been vaguely referring to this as a potential test vector,
  but only just now looked up the details. That's my bad :)
  
   The main gap I'd see is NUMA aware PCI device assignment since the
   PCI-to-NUMA-node mapping data comes from the BIOS and it does not
   look like this is fakeable as is.
  
  Yeah, although I'd expect that the data is parsed and returned by a
  library or utility that may be a hook for fakeification. However, it may
  very well be more trouble than it's worth.
  
  I still feel like we should be able to test generic PCI in a similar way
  (passing something like a USB controller through to the guest, etc).
  However, I'm willing to believe that the intersection of PCI and NUMA is
  a higher order complication :)
 
 Oh I forgot to mention with PCI device assignment (as well as having a
 bunch of PCI devices available[1]), the key requirement is an IOMMU.
 AFAIK, neither Xen nor KVM provides any IOMMU emulation, so I think we're
 out of luck for even basic PCI assignment testing inside VMs.

Ok, turns out that wasn't entirely accurate in general.

KVM *can* emulate an IOMMU, but it requires that the guest be booted
with the q35 machine type, instead of the ancient PIIX4 machine type,
and also QEMU must be launched with -machine iommu=on. We can't do
this in Nova, so although it is theoretically possible, it is not
doable for us in reality :-(
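For reference, the invocation being described would look roughly like the
following; note that later QEMU releases spell the IOMMU differently
(e.g. as a "-device intel-iommu"), so this is a sketch tied to the QEMU
of that era, with disk and memory values purely illustrative:

```shell
# Sketch: boot a guest with an emulated IOMMU. The q35 machine type is
# required; the default i440fx/PIIX4-era machine has no IOMMU support.
# The iommu=on flag is folded in here as a machine property.
qemu-system-x86_64 \
  -enable-kvm \
  -machine q35,iommu=on \
  -m 4096 \
  -drive file=guest.qcow2,if=virtio
```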

Regards,
Daniel




2014-11-12 Thread Russell Bryant
On 11/12/2014 02:11 PM, Steve Gordon wrote:
 NUMA
 
 
 We still need to identify some hardware to run third party CI for the
 NUMA-related work, and no doubt other things that will come up. It's
 expected that this will be an interim solution until OPNFV resources
 can be used (note cdub jokingly replied "1-2 years" when asked for a
 rough estimate - I mention this because based on a later discussion
 some people took this as a serious estimate).
 
 Ian did you have any luck kicking this off? Russell and I are also
 endeavouring to see what we can do on our side w.r.t. this short term
 approach - in particular if you find hardware we still need to find
 an owner to actually setup and manage it as discussed.
 
 In theory to get started we need a physical multi-socket box and a
 virtual machine somewhere on the same network to handle job control
 etc. I believe the tests themselves can be run in VMs (just not those
 exposed by existing public clouds) assuming a recent Libvirt and an
 appropriately crafted Libvirt XML that ensures the VM gets a
 multi-socket topology etc. (we can assist with this).

I just wanted to clarify the hardware requirement.  A minimal setup for
a first step can be just a single physical multi-socket machine.  We can
run a VM on that machine for control and create ephemeral VMs with numa
exposed to them for running the tests.
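The "appropriately crafted Libvirt XML" mentioned above would amount to a
domain-XML fragment along these lines (illustrative; CPU counts and sizes
are arbitrary, element names follow the libvirt domain XML format):

```xml
<!-- Give the test VM two sockets and two NUMA cells so the
     guest-visible topology resembles a real multi-socket host.
     Cell memory values are in KiB. -->
<vcpu>4</vcpu>
<cpu mode='host-passthrough'>
  <topology sockets='2' cores='2' threads='1'/>
  <numa>
    <cell id='0' cpus='0-1' memory='2097152'/>
    <cell id='1' cpus='2-3' memory='2097152'/>
  </numa>
</cpu>
```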

Note that in addition to setting up and maintaining the infrastructure,
we also need someone to write test cases.

-- 
Russell Bryant
