[Xen-devel] [ovmf baseline-only test] 74883: all pass

2018-06-17 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 74883 ovmf real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/74883/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf dde2dd64f07041c2ccc23dc7a5a846e667b7bb1a
baseline version:
 ovmf a05a8a5aa17da4bc7144706a9931d68beec1a61f

Last test of basis    74855  2018-06-11 16:23:52 Z    6 days
Testing same since    74883  2018-06-17 19:50:52 Z    0 days    1 attempts


People who touched revisions under test:
  Ard Biesheuvel 
  Benjamin You 
  cinnamon shia 
  Dandan Bi 
  Derek Lin 
  Dongao Guo 
  Gerd Hoffmann 
  Hao Wu 
  Jaben Carsey 
  Kinney, Michael D 
  Laszlo Ersek 
  Liming Gao 
  Michael D Kinney 
  Michael Zimmermann 
  Nickle Wang 
  Ruiyu Ni 
  Udit Kumar 
  Yonghong Zhu 
  Yunhua Feng 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvops  pass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.

(No revision log; it would be 745 lines long.)

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [qemu-mainline baseline-only test] 74882: tolerable FAIL

2018-06-17 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 74882 qemu-mainline real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/74882/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail like 74879
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install fail like 74879
 test-armhf-armhf-libvirt-xsm 12 guest-start  fail   like 74879
 test-armhf-armhf-xl-multivcpu 12 guest-start  fail  like 74879
 test-armhf-armhf-xl-xsm  12 guest-start  fail   like 74879
 test-armhf-armhf-libvirt 12 guest-start  fail   like 74879
 test-armhf-armhf-xl  12 guest-start  fail   like 74879
 test-armhf-armhf-xl-rtds 12 guest-start  fail   like 74879
 test-armhf-armhf-xl-midway   12 guest-start  fail   like 74879
 test-armhf-armhf-xl-credit2  12 guest-start  fail   like 74879
 test-amd64-amd64-qemuu-nested-intel 14 xen-boot/l1 fail like 74879
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail like 74879
 test-amd64-amd64-xl-pvshim   12 guest-start  fail   like 74879
 test-armhf-armhf-xl-vhd  10 debian-di-install fail   like 74879
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 74879
 test-armhf-armhf-libvirt-raw 10 debian-di-install fail   like 74879
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install fail never pass
 test-amd64-i386-xl-pvshim 12 guest-start  fail   never pass
 test-amd64-i386-libvirt  13 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass

version targeted for testing:
 qemuu                2ef2f16781af9dee6ba6517755e9073ba5799fa2
baseline version:
 qemuu                409c241f887a38bb7a2ac12e34d3a8d73922a9a5

Last test of basis    74879  2018-06-16 05:16:32 Z    1 days
Testing same since    74882  2018-06-17 18:19:25 Z    0 days    1 attempts


People who touched revisions under test:
  Alex Bennée 
  Alistair Francis 
  Balamuruhan S 
  Brijesh Singh 
  Cédric Le Goater 
  Daniel P. Berrangé 
  Dr. David Alan Gilbert 
  Edgar E. Iglesias 
  Eric Blake 
  Greg Kurz 
  Jan Kiszka 
  Jason Wang 
  Joel Stanley 
  John Snow 
  Julia Suvorova 
  Kevin Wolf 
  Laurent Vivier 
  Lin Ma 
  linzhecheng 
  Markus Armbruster 
  Max Reitz 
  Peter Maydell 
  Richard Henderson 
  Shannon Zhao 
  Thomas Huth 
  Vladimir Sementsov-Ogievskiy 
  Xiao Guangrong 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvops  pass
 build-armhf-pvops  pass
 build-i386-pvops pass
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  fail
 test-amd64-i386-xl   pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm pass
 test-amd64-amd64-libvirt-xsm pass
 test-armhf-armhf-libvirt-xsm fail
 test-amd64-i386-libvirt-xsm  pass
 test-amd64-amd64-xl-xsm  pass
 test-armhf-armhf-xl-xsm 

Re: [Xen-devel] [PATCH RFC 13/15] xen/arm: Allow vpl011 to be used by DomU

2018-06-17 Thread Julien Grall

Hi Stefano,

On 06/13/2018 11:15 PM, Stefano Stabellini wrote:

Make vpl011 usable without a userspace component in Dom0. In that
case, output is printed to the Xen serial and input is received
from the Xen serial one character at a time.

Call domain_vpl011_init during construct_domU.

Signed-off-by: Stefano Stabellini 
---
  xen/arch/arm/domain_build.c  |  9 +++-
  xen/arch/arm/vpl011.c| 98 +---
  xen/include/asm-arm/vpl011.h |  2 +
  3 files changed, 84 insertions(+), 25 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index ff65057..97f14ca 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2482,7 +2482,14 @@ int __init construct_domU(struct domain *d, struct dt_device_node *node)
  if ( rc < 0 )
  return rc;
  
-return __construct_domain(d, &kinfo);

+rc = __construct_domain(d, &kinfo);
+if ( rc < 0 )
+return rc;
+
+#ifdef CONFIG_SBSA_VUART_CONSOLE
+rc = domain_vpl011_init(d, NULL);


See my remark on the previous patch about exposing vpl011 by default.


+#endif
+return rc;
  }
  
  int __init construct_dom0(struct domain *d)

diff --git a/xen/arch/arm/vpl011.c b/xen/arch/arm/vpl011.c
index a281eab..5f1dc7a 100644
--- a/xen/arch/arm/vpl011.c
+++ b/xen/arch/arm/vpl011.c
@@ -34,6 +34,8 @@
  #include 
  #include 
  
+static void vpl011_data_avail(struct domain *d);

+
  /*
   * Since pl011 registers are 32-bit registers, all registers
   * are handled similarly allowing 8-bit, 16-bit and 32-bit
@@ -77,6 +79,29 @@ static void vpl011_update_interrupt_status(struct domain *d)
  #endif
  }
  
+void vpl011_read_char(struct domain *d, char c)


The name is slightly odd. From the name, I would expect that a character 
is returned. But in fact, you write a character you received into the 
ring. So a better name would be vpl011_rx_char.



+{
+unsigned long flags;
+XENCONS_RING_IDX in_cons, in_prod;
+struct xencons_interface *intf = d->arch.vpl011.ring_buf;
+
+VPL011_LOCK(d, flags);
+
+in_cons = intf->in_cons;
+in_prod = intf->in_prod;
+if (xencons_queued(in_prod, in_cons, sizeof(intf->in)) == sizeof(intf->in))
+{
+VPL011_UNLOCK(d, flags);
+return;
+}
+
+intf->in[xencons_mask(in_prod, sizeof(intf->in))] = c;
+intf->in_prod = in_prod + 1;
+
+VPL011_UNLOCK(d, flags);
+vpl011_data_avail(d);
+}
+
  static uint8_t vpl011_read_data(struct domain *d)
  {
  unsigned long flags;
@@ -166,9 +191,18 @@ static void vpl011_write_data(struct domain *d, uint8_t data)
  struct vpl011 *vpl011 = &d->arch.vpl011;
  struct xencons_interface *intf = vpl011->ring_buf;
  XENCONS_RING_IDX out_cons, out_prod;
+unsigned int fifo_level = 0;
  
  VPL011_LOCK(d, flags);
  
+if ( vpl011->ring_page == NULL )

+{
+printk("%c", data);
+if (data == '\n')
+printk("DOM%u: ", d->domain_id);
+goto done;
+}
+


I would rather introduce a separate function to read/write data for the 
case without a PV console, and use it where appropriate. This would make 
the code slightly easier to understand, because "ring_page == NULL" is 
slightly unintuitive.


An idea would be to introduce callbacks and set them during the 
initialization of the vpl011 for the domain.



  out_cons = intf->out_cons;
  out_prod = intf->out_prod;
  
@@ -184,13 +218,10 @@ static void vpl011_write_data(struct domain *d, uint8_t data)

  if ( xencons_queued(out_prod, out_cons, sizeof(intf->out)) !=
   sizeof (intf->out) )
  {
-unsigned int fifo_level;
-
  intf->out[xencons_mask(out_prod, sizeof(intf->out))] = data;
  out_prod += 1;
  smp_wmb();
  intf->out_prod = out_prod;
-


Spurious change.


  fifo_level = xencons_queued(out_prod, out_cons, sizeof(intf->out));
  
  if ( fifo_level == sizeof(intf->out) )

@@ -205,14 +236,15 @@ static void vpl011_write_data(struct domain *d, uint8_t data)
   */
  vpl011->uartfr |= BUSY;
  }
-
-vpl011_update_tx_fifo_status(vpl011, fifo_level);
-
-vpl011_update_interrupt_status(d);
  }
  else
  gprintk(XENLOG_ERR, "vpl011: Unexpected OUT ring buffer full\n");
  
+done:

+vpl011_update_tx_fifo_status(vpl011, fifo_level);
+
+vpl011_update_interrupt_status(d);


Hmmm, now you will also call vpl011_update_* in the error case, when the 
write fails. If you want to keep that, it should at least be explained 
in the commit message, or probably be a separate patch.



+
  vpl011->uartfr &= ~TXFE;
  
  VPL011_UNLOCK(d, flags);

@@ -462,13 +494,30 @@ int domain_vpl011_init(struct domain *d, struct vpl011_init_info *info)
  if ( vpl011->ring_buf )
  return -EINVAL;
  
-/* Map the guest PFN to Xen address space. */

-rc =  prepare_ring_for_helper(d,
-  gfn_x(info->gfn),
- 

Re: [Xen-devel] [PATCH RFC 02/15] xen/arm: move a few guest related #defines to public/arch-arm.h

2018-06-17 Thread Julien Grall

Hi Stefano,

On 06/14/2018 10:15 PM, Stefano Stabellini wrote:

On Thu, 14 Jun 2018, Julien Grall wrote:

On 13/06/18 23:15, Stefano Stabellini wrote:

Move a few constants defined by libxl_arm.c to
xen/include/public/arch-arm.h, so that they are together with the other
guest related #defines such as GUEST_GICD_BASE and GUEST_VPL011_SPI.
Also, this way they can be reused by hypervisor code.


All variables moved to arch-arm.h should be prefixed with GUEST_* to avoid 
a clash with the rest of Xen.


I'll do.



Signed-off-by: Stefano Stabellini 
CC: wei.l...@citrix.com
CC: ian.jack...@eu.citrix.com
---
   tools/libxl/libxl_arm.c   | 26 --
   xen/include/public/arch-arm.h | 26 ++
   2 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 8af9f6f..89a417f 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -8,23 +8,6 @@
   #include 
   #include 
   -/**
- * IRQ line type.
- * DT_IRQ_TYPE_NONE - default, unspecified type
- * DT_IRQ_TYPE_EDGE_RISING - rising edge triggered
- * DT_IRQ_TYPE_EDGE_FALLING - falling edge triggered
- * DT_IRQ_TYPE_EDGE_BOTH   - rising and falling edge triggered
- * DT_IRQ_TYPE_LEVEL_HIGH  - high level triggered
- * DT_IRQ_TYPE_LEVEL_LOW   - low level triggered
- */
-#define DT_IRQ_TYPE_NONE   0x0000
-#define DT_IRQ_TYPE_EDGE_RISING0x0001
-#define DT_IRQ_TYPE_EDGE_FALLING   0x0002
-#define DT_IRQ_TYPE_EDGE_BOTH   \
-(DT_IRQ_TYPE_EDGE_FALLING | DT_IRQ_TYPE_EDGE_RISING)
-#define DT_IRQ_TYPE_LEVEL_HIGH 0x0004
-#define DT_IRQ_TYPE_LEVEL_LOW  0x0008
-


Those defines have nothing to do with the guest itself. They are currently
define in Xen without the DT_ prefix.


Sounds like we want to get rid of the DT_IRQ_TYPE_* definitions
completely, move the IRQ_TYPE_* definitions from device_tree.h to here,
and start using them in tools/libxl/libxl_arm.c (which involves a
renaming s/DT_IRQ_TYPE/IRQ_TYPE/g).

Is that what you had in mind?


Even if DT is Arm only today, the DT code is in common code and 
therefore the header device_tree.h should contain everything necessary to 
use a DT.


If we still want to share constants with libxl then I would prefer to 
introduce a new header (similar to acpi/acconfig.h) that provides all the 
common values.


Note that the hypervisor ones don't have the DT_ prefix because they are 
used to describe IRQs for both DT and ACPI in Xen. It is not that nice; we 
might want to introduce aliases in that case. So we keep DT_* in libxl.


Cheers,

--
Julien Grall


Re: [Xen-devel] [PATCH RFC 14/15] xen/arm: call construct_domU from start_xen and start DomU VMs

2018-06-17 Thread Julien Grall



On 06/13/2018 11:15 PM, Stefano Stabellini wrote:

Introduce support for the "xen,domU" compatible node on device tree.
Create new DomU VMs based on the information found on device tree under
"xen,domU".


While I like the idea of having multiple domains created by Xen, I think 
there are still a few open questions here:
 1) The domains will be listed via "xl list". So are they still 
manageable via DOMCTL?
 2) Is it possible to restart those domains?
 3) If a domain crashes, what will happen? Are they just going to sit 
there using resources until the platform is rebooted?
 4) How do you handle scheduling? Is it still possible to do it via 
Dom0? What about the dom0less situation?




Introduce a simple global variable named max_init_domid to keep track of
the initially allocated domids.


What is the exact goal of this new variable?



Move the discard_initial_modules() call to after DomUs have been built.

Signed-off-by: Stefano Stabellini 
---
  xen/arch/arm/domain_build.c |  2 --
  xen/arch/arm/setup.c| 35 ++-
  xen/include/asm-arm/setup.h |  2 ++
  xen/include/asm-x86/setup.h |  2 ++


You need to CC x86 maintainers for this change.


  4 files changed, 38 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 97f14ca..e2d370f 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2545,8 +2545,6 @@ int __init construct_dom0(struct domain *d)
  if ( rc < 0 )
  return rc;
  
-discard_initial_modules();

-


Please mention this move in the commit message.


  return __construct_domain(d, &kinfo);
  }
  
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c

index 98bdb24..3723704 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -63,6 +63,8 @@ static unsigned long opt_xenheap_megabytes __initdata;
  integer_param("xenheap_megabytes", opt_xenheap_megabytes);
  #endif
  
+domid_t __read_mostly max_init_domid = 0;

+
  static __used void init_done(void)
  {
  free_init_memory();
@@ -711,6 +713,8 @@ void __init start_xen(unsigned long boot_phys_offset,
  struct bootmodule *xen_bootmodule;
  struct domain *dom0;
  struct xen_domctl_createdomain dom0_cfg = {};
+struct dt_device_node *chosen;
+struct dt_device_node *node;
  
  dcache_line_bytes = read_dcache_line_bytes();
  
@@ -860,7 +864,7 @@ void __init start_xen(unsigned long boot_phys_offset,

  dom0_cfg.arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;
  dom0_cfg.arch.nr_spis = gic_number_lines() - 32;
  
-dom0 = domain_create(0, &dom0_cfg);

+dom0 = domain_create(max_init_domid++, &dom0_cfg);
  if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0) == NULL) )
  panic("Error creating domain 0");
  
@@ -886,6 +890,35 @@ void __init start_xen(unsigned long boot_phys_offset,
  
  domain_unpause_by_systemcontroller(dom0);
  
+chosen = dt_find_node_by_name(dt_host, "chosen");

+if ( chosen != NULL )
+{
+dt_for_each_child_node(chosen, node)
+{
+struct domain *d;
+struct xen_domctl_createdomain d_cfg = {};


There are quite a few fields in xen_domctl_createdomain that we may want 
to let the user set. I am thinking of ssidref for XSM. How is 
this going to be done?



+
+if ( !dt_device_is_compatible(node, "xen,domU") )
+continue;
+
+d_cfg.arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;


Any reason to impose using the native GIC here?


+d_cfg.arch.nr_spis = gic_number_lines() - 32;


That's a bit unfortunate. So you are imposing the use of 1020 IRQs, and the 
associated waste of memory, when only 32 SPIs are enough at the moment.



+
+d = domain_create(max_init_domid++, &d_cfg);
+if ( IS_ERR(d))


Coding style ( ... )


+panic("Error creating domU");
+
+d->is_privileged = 0;
+d->target = NULL;


Why do you set them? They are zeroed by default.


+
+if ( construct_domU(d, node) != 0)


Coding style ( ... )


+printk("Could not set up DOMU guest OS");
+
+domain_unpause_by_systemcontroller(d);
+}
+}


Please introduce a new function; this would avoid growing 
start_xen() too much.




+discard_initial_modules();
+
  /* Switch on to the dynamically allocated stack for the idle vcpu
   * since the static one we're running on is about to be freed. */
  memcpy(idle_vcpu[0]->arch.cpu_info, get_cpu_info(),
diff --git a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h
index e9f9905..578f3b9 100644
--- a/xen/include/asm-arm/setup.h
+++ b/xen/include/asm-arm/setup.h
@@ -56,6 +56,8 @@ struct bootinfo {
  
  extern struct bootinfo bootinfo;
  
+extern domid_t max_init_domid;

+
  void arch_init_memory(void);
  
  void copy_from_paddr(void *dst, paddr_t paddr, unsigned long len);

diff --git a/xen/include/asm-x86/setup.h 

Re: [Xen-devel] [PATCH RFC 09/15] xen/arm: refactor construct_dom0

2018-06-17 Thread Julien Grall

Hi Stefano,

On 06/15/2018 12:35 AM, Stefano Stabellini wrote:

On Thu, 14 Jun 2018, Julien Grall wrote:

On 13/06/18 23:15, Stefano Stabellini wrote:

-
-printk("*** LOADING DOMAIN 0 ***\n");
-if ( dom0_mem <= 0 )
-{
-warning_add("PLEASE SPECIFY dom0_mem PARAMETER - USING 512M FOR NOW\n");
-dom0_mem = MB(512);
-}
-
-
-iommu_hwdom_init(d);
-
-d->max_pages = ~0U;
-
-kinfo.unassigned_mem = dom0_mem;
-kinfo.d = d;
-
-rc = kernel_probe();
-if ( rc < 0 )
-return rc;
-
   #ifdef CONFIG_ARM_64
   /* if aarch32 mode is not supported at EL1 do not allow 32-bit domain
*/
-if ( !(cpu_has_el1_32) && kinfo.type == DOMAIN_32BIT )
+if ( !(cpu_has_el1_32) && kinfo->type == DOMAIN_32BIT )
   {
   printk("Platform does not support 32-bit domain\n");
   return -EINVAL;
   }
-d->arch.type = kinfo.type;


Any reason to move this out?


Yeah, initially I left it there but it didn't work. It needs to be set
before calling allocate_memory() for domUs otherwise memory allocations
fail.


Oh, because allocate_domain(d) relies on is_domain_32bit, right? I don't 
much like the duplication here just because of prepare_dtb_domU. I am 
wondering if we could do:


if ( !is_hardware_domain(d) )
  prepare_dtb_domU(...);
else if ( acpi_disabled )
  prepare_acpi_hwdom(...);
else
  prepare_dt_hwdom();


+if ( acpi_disabled )
+rc = prepare_dtb(d, &kinfo);
+else
+rc = prepare_acpi(d, &kinfo);
+
+if ( rc < 0 )
+return rc;
+
+discard_initial_modules();


You say "no functional change" in this patch. But this is one. The modules are
now discarded much earlier. This implies that the memory backing the Image/Initrd will
be free to be re-used at any time.

I don't think this is what we want. Unless you can promise no memory is
allocated in __construct_domain().


discard_initial_modules() will be moved later by patch #14, but I think
it makes sense to call discard_initial_modules() after
__construct_domain() here.


Yeah, I noticed you moved the discard_initial_modules() later on. But I 
would like to have the series bisectable if possible :).


Cheers,

--
Julien Grall


Re: [Xen-devel] [PATCH RFC 12/15] xen/arm: generate vpl011 node on device tree for domU

2018-06-17 Thread Julien Grall

Hi Stefano,

On 06/13/2018 11:15 PM, Stefano Stabellini wrote:

Introduce vpl011 support to guests started from Xen: it provides a
simple way to print output from a guest, as most guests come with a
pl011 driver. It is also able to provide a working console with
interrupt support.

Signed-off-by: Stefano Stabellini 
---
  xen/arch/arm/domain_build.c | 70 +
  1 file changed, 70 insertions(+)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index b4f560f..ff65057 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1470,6 +1470,70 @@ static int make_timer_domU_node(const struct domain *d, void *fdt)
  return res;
  }
  
+static void set_interrupt(gic_interrupt_t *interrupt, unsigned int irq,


The definition of interrupt looks suspicious. gic_interrupt_t is defined 
as __be32[3]. Here you pass a pointer, so the interrupt type would be __be32 
**, which you crudely cast to __be32 * below.


Most likely you don't want to pass a pointer here; just use the type 
gic_interrupt_t. Because it is an array, there will be no issue.



+  unsigned int cpumask, unsigned int level)
+{
+__be32 *cells = (__be32 *) interrupt;


Explicit casts are always a bad idea. If you need one, then most likely 
you did something wrong :). In that case the interrupt type is __be32** and 
you cast it to __be32*. If you change the type as suggested above, then the 
cast will not be necessary here.



+int is_ppi = (irq < 32);
+
+irq -= (is_ppi) ? 16: 32; /* PPIs start at 16, SPIs at 32 */
+
+/* See linux Documentation/devicetree/bindings/arm/gic.txt */
+dt_set_cell(&cells, 1, is_ppi); /* is a PPI? */
+dt_set_cell(&cells, 1, irq);
+dt_set_cell(&cells, 1, (cpumask << 8) | level);
+}


We already have a function to generate PPI interrupts 
(set_interrupt_ppi). Would it be possible to extend it to support all interrupts?


Most likely, you will want to use set_interrupt(...) everywhere and just 
drop set_interrupt_ppi.



+
+#ifdef CONFIG_SBSA_VUART_CONSOLE
+static int make_vpl011_uart_node(const struct domain *d, void *fdt,
+ int addrcells, int sizecells)
+{
+int res;
+gic_interrupt_t intr;
+int reg_size = addrcells + sizecells;
+int nr_cells = reg_size;
+__be32 reg[nr_cells];
+__be32 *cells;
+
+res = fdt_begin_node(fdt, "sbsa-pl011");
+if (res)


Coding style:

if ( ... )


+return res;
+
+res = fdt_property_string(fdt, "compatible", "arm,sbsa-uart");


To be clear, you are exposing an SBSA-compatible UART and not a PL011. 
The SBSA UART is a subset of PL011 r1p5. A full PL011 implementation in Xen 
would just be too difficult, so your guests may require some changes in 
their driver.


I think this is a small price to pay, but I wanted to make sure you 
don't expect the guest to drive the UART the same way as a PL011.



+if (res)


Coding style


+return res;
+
+cells = &reg[0];
+dt_child_set_range(&cells, addrcells, sizecells, GUEST_PL011_BASE,
+GUEST_PL011_SIZE);


The indentation looks wrong here.


+if (res)


Coding style


+return res;
+res = fdt_property(fdt, "reg", reg, sizeof(reg));
+if (res)


Coding style


+return res;
+
+set_interrupt(&intr, GUEST_VPL011_SPI, 0xf, DT_IRQ_TYPE_LEVEL_HIGH);
+
+res = fdt_property(fdt, "interrupts", intr, sizeof (intr));
+if (res)


Coding style


+return res;
+
+res = fdt_property_cell(fdt, "interrupt-parent",
+PHANDLE_GIC);
+if (res)


Coding style


+return res;
+
+/* Use a default baud rate of 115200. */
+fdt_property_u32(fdt, "current-speed", 115200);
+
+res = fdt_end_node(fdt);
+if (res)


Coding style


+return res;
+
+return 0;
+}
+#endif
+
  #define DOMU_DTB_SIZE 4096
  static int prepare_dtb_domU(struct domain *d, struct kernel_info *kinfo)
  {
@@ -1531,6 +1595,12 @@ static int prepare_dtb_domU(struct domain *d, struct kernel_info *kinfo)
  if ( ret )
  goto err;
  
+#ifdef CONFIG_SBSA_VUART_CONSOLE

+ret = make_vpl011_uart_node(d, kinfo->fdt, addrcells, sizecells);


I would prefer if we don't expose the pl011 by default to a guest, and 
provide a way to enable it for a given guest.



+if ( ret )
+goto err;
+#endif
+
  ret = fdt_end_node(kinfo->fdt);
  if ( ret < 0 )
  goto err;



Cheers,

--
Julien Grall


[Xen-devel] [xen-4.9-testing test] 124248: regressions - FAIL

2018-06-17 Thread osstest service owner
flight 124248 xen-4.9-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/124248/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail REGR. vs. 124043
 test-armhf-armhf-xl-vhd 15 guest-start/debian.repeat fail REGR. vs. 124043

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail like 123939
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail like 124009
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 124009
 test-amd64-i386-xl-qemuu-ws16-amd64 16 guest-localmigrate/x10 fail like 124009
 test-amd64-i386-libvirt-pair 22 guest-migrate/src_host/dst_host fail like 124043
 test-amd64-amd64-xl-qemuu-ws16-amd64 14 guest-localmigrate fail like 124043
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail like 124043
 test-amd64-i386-xl-qemut-ws16-amd64 18 guest-start/win.repeat fail like 124043
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeatfail  like 124043
 test-amd64-i386-libvirt  13 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check fail   never pass
 test-arm64-arm64-xl  13 migrate-support-check fail   never pass
 test-arm64-arm64-xl  14 saverestore-support-check fail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-check fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check fail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-check fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check fail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check fail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt 14 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check fail   never pass
 test-armhf-armhf-xl  13 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check fail   never pass
 test-armhf-armhf-xl  14 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-check fail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-check fail   never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 xen  238007d6fae9447bf5e8e73d67ae9fb844e7ff2a
baseline version:
 xen  1c6b8f23b9c5099cdf9a530e0d044b1ab5a83511

Last test of basis   124043  2018-06-10 12:26:39 Z    7 days
Failing since        124180  2018-06-13 21:06:21 Z    4 days    3 attempts
Testing same since   124248  2018-06-16 15:07:22 Z    1 days    1 attempts


People who 

[Xen-devel] [linux-next test] 124209: trouble: broken/fail/pass

2018-06-17 Thread osstest service owner
flight 124209 linux-next real [real]
http://logs.test-lab.xenproject.org/osstest/logs/124209/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-rtds broken
 test-armhf-armhf-xl  broken
 test-armhf-armhf-xl   4 host-install(4)broken REGR. vs. 124151

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds  4 host-install(4)broken REGR. vs. 124151

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check fail blocked in 124151
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 124151
 test-armhf-armhf-examine  8 reboot   fail  like 124151
 test-armhf-armhf-xl-xsm   7 xen-boot fail  like 124151
 test-armhf-armhf-xl-vhd   7 xen-boot fail  like 124151
 test-armhf-armhf-libvirt  7 xen-boot fail  like 124151
 test-armhf-armhf-xl-multivcpu  7 xen-boot fail like 124151
 test-armhf-armhf-xl-cubietruck  7 xen-boot fail like 124151
 test-armhf-armhf-xl-credit2   7 xen-boot fail  like 124151
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail like 124151
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 124151
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 124151
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail like 124151
 test-armhf-armhf-libvirt-raw  7 xen-boot fail  like 124151
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 124151
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 124151
 test-amd64-i386-libvirt  13 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check fail   never pass
 test-amd64-i386-xl-pvshim 12 guest-start  fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check fail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-check fail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-check fail   never pass
 test-arm64-arm64-xl  13 migrate-support-check fail   never pass
 test-arm64-arm64-xl  14 saverestore-support-check fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check fail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 linux                4b373f94fee5acf2ff4c1efbb3f702060379df1f
baseline version:
 linux                19785cf93b6c4252981894394f2dbd35c5e5d1ec

Last test of basis    (not found)
Failing since         (not found)
Testing same since    124209  2018-06-15 09:19:01 Z    2 days    1 attempts

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-arm64-libvirt  

[Xen-devel] [linux-4.14 test] 124233: regressions - FAIL

2018-06-17 Thread osstest service owner
flight 124233 linux-4.14 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/124233/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt-xsm  6 xen-install  fail REGR. vs. 124110

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt      13 migrate-support-check        fail never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop            fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check       fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check   fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-xsm      13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-xsm      14 saverestore-support-check    fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop           fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop           fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop            fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop           fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop            fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop           fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop            fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install       fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install      fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install      fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install       fail never pass

version targeted for testing:
 linux                cda6fd4d9382205bb792255cd56a91062d404bc0
baseline version:
 linux                70d7bbd9b504c1dde0dc44a469a513695d9cbdd6

Last test of basis    124110  2018-06-12 13:07:52 Z    5 days
Testing same since    124233  2018-06-16 08:12:06 Z    1 days    1 attempts


People who touched revisions under test:
  Alan Stern 
  Alexander Kappner 
  Bart Van Assche 
  Bin Liu 
  Dave Martin 
  Dmitry Torokhov 
  Ethan Lee 
  Fabio Estevam 
  Felipe Balbi 
  Felix Wilhelm 
  Florian Westphal 
  Geert Uytterhoeven 
  Gil Kupfer 
  Greg Kroah-Hartman 
  Gustavo A. R. Silva 
  Herbert Xu 
  Horia Geantă 
  Jan Glauber 

[Xen-devel] [ovmf test] 124243: all pass - PUSHED

2018-06-17 Thread osstest service owner
flight 124243 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/124243/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf dde2dd64f07041c2ccc23dc7a5a846e667b7bb1a
baseline version:
 ovmf a05a8a5aa17da4bc7144706a9931d68beec1a61f

Last test of basis    124058  2018-06-11 03:10:30 Z    6 days
Failing since         124074  2018-06-11 16:41:40 Z    6 days    7 attempts
Testing same since    124243  2018-06-16 13:00:38 Z    1 days    1 attempts


People who touched revisions under test:
  Ard Biesheuvel 
  Benjamin You 
  cinnamon shia 
  Dandan Bi 
  Derek Lin 
  Dongao Guo 
  Gerd Hoffmann 
  Hao Wu 
  Jaben Carsey 
  Kinney, Michael D 
  Laszlo Ersek 
  Liming Gao 
  Michael D Kinney 
  Michael Zimmermann 
  Nickle Wang 
  Ruiyu Ni 
  Udit Kumar 
  Yonghong Zhu 
  Yunhua Feng 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   a05a8a5aa1..dde2dd64f0  dde2dd64f07041c2ccc23dc7a5a846e667b7bb1a -> xen-tested-master


[Xen-devel] [qemu-mainline test] 124232: tolerable FAIL - PUSHED

2018-06-17 Thread osstest service owner
flight 124232 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/124232/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop           fail like 124199
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail like 124199
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop            fail like 124199
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check    fail like 124199
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop           fail like 124199
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail like 124199
 test-amd64-i386-xl-pvshim    12 guest-start                  fail never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-xsm      13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-xsm      14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check       fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check   fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop            fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install      fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install       fail never pass

version targeted for testing:
 qemuu                2ef2f16781af9dee6ba6517755e9073ba5799fa2
baseline version:
 qemuu                409c241f887a38bb7a2ac12e34d3a8d73922a9a5

Last test of basis    124199  2018-06-14 21:20:15 Z    2 days
Testing same since    124232  2018-06-16 05:04:26 Z    1 days    1 attempts


People who touched revisions under test:
  Alex Bennée 
  Alistair Francis 
  Balamuruhan S 
  Brijesh Singh 
  Cédric Le Goater 
  Daniel P. Berrangé 
  Dr. David Alan Gilbert 
  Edgar E. Iglesias 
  Eric Blake 
  Greg Kurz 
  Jan Kiszka 
  Jason Wang 
  Joel Stanley 
  John Snow 
  Julia Suvorova 
  Kevin Wolf 
  Laurent Vivier 
  Lin Ma 
  linzhecheng 
  Markus Armbruster 
  Max Reitz 
  Peter Maydell 
  Richard Henderson 
  Shannon Zhao 
  Thomas Huth 
  Vladimir Sementsov-Ogievskiy 
  Xiao Guangrong 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64 

[Xen-devel] [libvirt test] 124239: regressions - FAIL

2018-06-17 Thread osstest service owner
flight 124239 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/124239/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt   6 libvirt-buildfail REGR. vs. 123814
 build-i386-libvirt6 libvirt-buildfail REGR. vs. 123814
 build-arm64-libvirt   6 libvirt-buildfail REGR. vs. 123814
 build-armhf-libvirt   6 libvirt-buildfail REGR. vs. 123814

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a

version targeted for testing:
 libvirt  2b43314d8c85fec9d319a16657dd207d4c451aa3
baseline version:
 libvirt  076a2b409667dd9f716a2a2085e1ffea9d58fe8b

Last test of basis    123814  2018-06-05 04:19:23 Z   12 days
Failing since         123840  2018-06-06 04:19:28 Z   11 days   11 attempts
Testing same since    124239  2018-06-16 09:58:57 Z    1 days    1 attempts


People who touched revisions under test:
  Andrea Bolognani 
  Anya Harter 
  Brijesh Singh 
  Chen Hanxiao 
  Christian Ehrhardt 
  Cole Robinson 
  Daniel Nicoletti 
  Daniel P. Berrangé 
  Erik Skultety 
  Fabiano Fidêncio 
  Filip Alac 
  intrigeri 
  Jamie Strandboge 
  Jiri Denemark 
  John Ferlan 
  Julio Faracco 
  Ján Tomko 
  Katerina Koukiou 
  Laszlo Ersek 
  Marc Hartmayer 
  Martin Kletzander 
  Michal Privoznik 
  Pavel Hrdina 
  Peter Krempa 
  Radostin Stoyanov 
  Ramy Elkest 
  ramyelkest 
  Roman Bogorodskiy 
  Stefan Bader 
  Stefan Berger 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  fail
 build-arm64-libvirt  fail
 build-armhf-libvirt  fail
 build-i386-libvirt   fail
 build-amd64-pvopspass
 build-arm64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmblocked 
 test-amd64-amd64-libvirt-xsm blocked 
 test-arm64-arm64-libvirt-xsm blocked 
 test-armhf-armhf-libvirt-xsm blocked 
 test-amd64-i386-libvirt-xsm  blocked 
 test-amd64-amd64-libvirt blocked 
 test-arm64-arm64-libvirt blocked 
 test-armhf-armhf-libvirt blocked 
 test-amd64-i386-libvirt  blocked 
 test-amd64-amd64-libvirt-pairblocked 
 test-amd64-i386-libvirt-pair blocked 
 test-arm64-arm64-libvirt-qcow2   blocked 
 test-armhf-armhf-libvirt-raw blocked 
 test-amd64-amd64-libvirt-vhd blocked 



[Xen-devel] [PATCH RFC v2 21/23] xen/mem_paging: move paging op arguments into a union

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

In preparation for the addition of a mem paging op with different
arguments than the existing ops, move the op-specific arguments into a
union.

No functional change.

Signed-off-by: Joshua Otto 
---
 tools/libxc/xc_mem_paging.c  |  8 
 xen/arch/x86/mm/mem_paging.c |  6 +++---
 xen/include/public/memory.h  | 12 
 3 files changed, 15 insertions(+), 11 deletions(-)

diff --git a/tools/libxc/xc_mem_paging.c b/tools/libxc/xc_mem_paging.c
index 28611f4..f314b08 100644
--- a/tools/libxc/xc_mem_paging.c
+++ b/tools/libxc/xc_mem_paging.c
@@ -29,10 +29,10 @@ static int xc_mem_paging_memop(xc_interface *xch, domid_t domain_id,
 
    memset(&mpo, 0, sizeof(mpo));
 
-mpo.op  = op;
-mpo.domain  = domain_id;
-mpo.gfn = gfn;
-mpo.buffer  = (unsigned long) buffer;
+mpo.op   = op;
+mpo.domain   = domain_id;
+mpo.u.single.gfn = gfn;
+mpo.u.single.buffer  = (unsigned long) buffer;
 
    return do_memory_op(xch, XENMEM_paging_op, &mpo, sizeof(mpo));
 }
diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
index a049e0d..e23e26c 100644
--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -49,15 +49,15 @@ int mem_paging_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
 switch( mpo.op )
 {
 case XENMEM_paging_op_nominate:
-rc = p2m_mem_paging_nominate(d, mpo.gfn);
+rc = p2m_mem_paging_nominate(d, mpo.u.single.gfn);
 break;
 
 case XENMEM_paging_op_evict:
-rc = p2m_mem_paging_evict(d, mpo.gfn);
+rc = p2m_mem_paging_evict(d, mpo.u.single.gfn);
 break;
 
 case XENMEM_paging_op_prep:
-rc = p2m_mem_paging_prep(d, mpo.gfn, mpo.buffer);
+rc = p2m_mem_paging_prep(d, mpo.u.single.gfn, mpo.u.single.buffer);
 if ( !rc )
 copyback = 1;
 break;
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 6eee0c8..49ef162 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -394,10 +394,14 @@ struct xen_mem_paging_op {
 uint8_t op; /* XENMEM_paging_op_* */
 domid_t domain;
 
-/* PAGING_PREP IN: buffer to immediately fill page in */
-uint64_aligned_tbuffer;
-/* Other OPs */
-uint64_aligned_tgfn;   /* IN:  gfn of page being operated on */
+union {
+struct {
+/* PAGING_PREP IN: buffer to immediately fill page in */
+uint64_aligned_tbuffer;
+/* Other OPs */
+uint64_aligned_tgfn;   /* IN:  gfn of page being operated on */
+} single;
+} u;
 };
 typedef struct xen_mem_paging_op xen_mem_paging_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_mem_paging_op_t);
-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH RFC v2 06/23] libxc/xc_sr_restore: factor helpers out of handle_page_data()

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

When processing a PAGE_DATA record, the restore code:
1) applies a number of sanity checks on the record's headers and size
2) decodes the list of packed page info into pfns and their types
3) using the pfn and type info, populates and fills the pages into the
   guest using process_page_data()

Steps 1) and 2) are also useful for other types of pages records
introduced by postcopy live migration, so factor them into reusable
helper routines.

No functional change.

Signed-off-by: Joshua Otto 
---
 tools/libxc/xc_sr_common.c  | 36 ++
 tools/libxc/xc_sr_common.h  | 10 +
 tools/libxc/xc_sr_restore.c | 89 ++---
 3 files changed, 97 insertions(+), 38 deletions(-)

diff --git a/tools/libxc/xc_sr_common.c b/tools/libxc/xc_sr_common.c
index 08abe9a..f443974 100644
--- a/tools/libxc/xc_sr_common.c
+++ b/tools/libxc/xc_sr_common.c
@@ -140,6 +140,42 @@ int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec)
 return 0;
 };
 
+int validate_pages_record(struct xc_sr_context *ctx, struct xc_sr_record *rec,
+  uint32_t expected_type)
+{
+xc_interface *xch = ctx->xch;
+struct xc_sr_rec_pages_header *pages = rec->data;
+
+if ( rec->type != expected_type )
+{
+ERROR("%s record type expected, instead received record of type "
+  "%08x (%s)", rec_type_to_str(expected_type), rec->type,
+  rec_type_to_str(rec->type));
+return -1;
+}
+else if ( rec->length < sizeof(*pages) )
+{
+ERROR("%s record truncated: length %u, min %zu",
+  rec_type_to_str(rec->type), rec->length, sizeof(*pages));
+return -1;
+}
+else if ( pages->count < 1 )
+{
+ERROR("Expected at least 1 pfn in %s record",
+  rec_type_to_str(rec->type));
+return -1;
+}
+    else if ( rec->length < sizeof(*pages) + (pages->count * sizeof(uint64_t)) )
+{
+ERROR("%s record (length %u) too short to contain %u"
+  " pfns worth of information", rec_type_to_str(rec->type),
+  rec->length, pages->count);
+return -1;
+}
+
+return 0;
+}
+
 static void __attribute__((unused)) build_assertions(void)
 {
 BUILD_BUG_ON(sizeof(struct xc_sr_ihdr) != 24);
diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
index 2f33ccc..b1aa88e 100644
--- a/tools/libxc/xc_sr_common.h
+++ b/tools/libxc/xc_sr_common.h
@@ -392,6 +392,16 @@ static inline int write_record(struct xc_sr_context *ctx, 
int fd,
 int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec);
 
 /*
+ * Given a record of one of the page data types, validate it by:
+ * - checking its actual type against its specific expected type
+ * - sanity checking its actual length against its claimed length
+ *
+ * Returns 0 on success and non-0 on failure.
+ */
+int validate_pages_record(struct xc_sr_context *ctx, struct xc_sr_record *rec,
+  uint32_t expected_type);
+
+/*
  * This would ideally be private in restore.c, but is needed by
  * x86_pv_localise_page() if we receive pagetables frames ahead of the
  * contents of the frames they point at.
diff --git a/tools/libxc/xc_sr_restore.c b/tools/libxc/xc_sr_restore.c
index fc47a25..00fad7d 100644
--- a/tools/libxc/xc_sr_restore.c
+++ b/tools/libxc/xc_sr_restore.c
@@ -326,45 +326,21 @@ static int process_page_data(struct xc_sr_context *ctx, 
unsigned count,
 }
 
 /*
- * Validate a PAGE_DATA record from the stream, and pass the results to
- * process_page_data() to actually perform the legwork.
+ * Given a PAGE_DATA record, decode each packed entry into its encoded pfn and
+ * type, storing the results in the pfns and types buffers.
+ *
+ * Returns the number of pages of real data, or < 0 on error.
  */
-static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
+static int decode_pages_record(struct xc_sr_context *ctx,
+   struct xc_sr_rec_pages_header *pages,
+   /* OUT */ xen_pfn_t *pfns,
+   /* OUT */ uint32_t *types)
 {
 xc_interface *xch = ctx->xch;
-struct xc_sr_rec_pages_header *pages = rec->data;
-unsigned i, pages_of_data = 0;
-int rc = -1;
-
-xen_pfn_t *pfns = NULL, pfn;
-uint32_t *types = NULL, type;
-
-if ( rec->length < sizeof(*pages) )
-{
-ERROR("PAGE_DATA record truncated: length %u, min %zu",
-  rec->length, sizeof(*pages));
-goto err;
-}
-else if ( pages->count < 1 )
-{
-ERROR("Expected at least 1 pfn in PAGE_DATA record");
-goto err;
-}
-    else if ( rec->length < sizeof(*pages) + (pages->count * sizeof(uint64_t)) )
-{
-ERROR("PAGE_DATA record (length %u) too short to contain %u"
-  " pfns worth of information", rec->length, pages->count);
-goto err;
-}
-
-pfns = 

[Xen-devel] [PATCH RFC v2 02/23] libxc/xc_sr: parameterise write_record() on fd

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

Right now, write_split_record() - which is delegated to by
write_record() - implicitly writes to ctx->fd.  This means it can't be
used with the restore context's send_back_fd, which is inconvenient.

Add an 'fd' parameter to both write_record() and write_split_record(),
and mechanically update all existing callsites to pass ctx->fd for it.

No functional change.

Signed-off-by: Joshua Otto 
Acked-by: Wei Liu 
Reviewed-by: Andrew Cooper 
---
 tools/libxc/xc_sr_common.c   |  6 +++---
 tools/libxc/xc_sr_common.h   |  8 
 tools/libxc/xc_sr_common_x86.c   |  2 +-
 tools/libxc/xc_sr_save.c |  6 +++---
 tools/libxc/xc_sr_save_x86_hvm.c |  5 +++--
 tools/libxc/xc_sr_save_x86_pv.c  | 17 +
 6 files changed, 23 insertions(+), 21 deletions(-)

diff --git a/tools/libxc/xc_sr_common.c b/tools/libxc/xc_sr_common.c
index 48fa676..c1babf6 100644
--- a/tools/libxc/xc_sr_common.c
+++ b/tools/libxc/xc_sr_common.c
@@ -52,8 +52,8 @@ const char *rec_type_to_str(uint32_t type)
 return "Reserved";
 }
 
-int write_split_record(struct xc_sr_context *ctx, struct xc_sr_record *rec,
-   void *buf, size_t sz)
+int write_split_record(struct xc_sr_context *ctx, int fd,
+   struct xc_sr_record *rec, void *buf, size_t sz)
 {
 static const char zeroes[(1u << REC_ALIGN_ORDER) - 1] = { 0 };
 
@@ -81,7 +81,7 @@ int write_split_record(struct xc_sr_context *ctx, struct xc_sr_record *rec,
 if ( sz )
 assert(buf);
 
-if ( writev_exact(ctx->fd, parts, ARRAY_SIZE(parts)) )
+if ( writev_exact(fd, parts, ARRAY_SIZE(parts)) )
 goto err;
 
 return 0;
diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
index a83f22a..2f33ccc 100644
--- a/tools/libxc/xc_sr_common.h
+++ b/tools/libxc/xc_sr_common.h
@@ -361,8 +361,8 @@ struct xc_sr_record
  *
  * Returns 0 on success and non0 on failure.
  */
-int write_split_record(struct xc_sr_context *ctx, struct xc_sr_record *rec,
-   void *buf, size_t sz);
+int write_split_record(struct xc_sr_context *ctx, int fd,
+   struct xc_sr_record *rec, void *buf, size_t sz);
 
 /*
  * Writes a record to the stream, applying correct padding where appropriate.
@@ -371,10 +371,10 @@ int write_split_record(struct xc_sr_context *ctx, struct xc_sr_record *rec,
  *
  * Returns 0 on success and non0 on failure.
  */
-static inline int write_record(struct xc_sr_context *ctx,
+static inline int write_record(struct xc_sr_context *ctx, int fd,
struct xc_sr_record *rec)
 {
-return write_split_record(ctx, rec, NULL, 0);
+return write_split_record(ctx, fd, rec, NULL, 0);
 }
 
 /*
diff --git a/tools/libxc/xc_sr_common_x86.c b/tools/libxc/xc_sr_common_x86.c
index 98f1cef..7b3dd50 100644
--- a/tools/libxc/xc_sr_common_x86.c
+++ b/tools/libxc/xc_sr_common_x86.c
@@ -18,7 +18,7 @@ int write_tsc_info(struct xc_sr_context *ctx)
 return -1;
 }
 
-    return write_record(ctx, &rec);
+    return write_record(ctx, ctx->fd, &rec);
 }
 
 int handle_tsc_info(struct xc_sr_context *ctx, struct xc_sr_record *rec)
diff --git a/tools/libxc/xc_sr_save.c b/tools/libxc/xc_sr_save.c
index 3837bc1..8aba0d8 100644
--- a/tools/libxc/xc_sr_save.c
+++ b/tools/libxc/xc_sr_save.c
@@ -53,7 +53,7 @@ static int write_end_record(struct xc_sr_context *ctx)
 {
 struct xc_sr_record end = { REC_TYPE_END, 0, NULL };
 
-    return write_record(ctx, &end);
+    return write_record(ctx, ctx->fd, &end);
 }
 
 /*
@@ -63,7 +63,7 @@ static int write_checkpoint_record(struct xc_sr_context *ctx)
 {
 struct xc_sr_record checkpoint = { REC_TYPE_CHECKPOINT, 0, NULL };
 
-    return write_record(ctx, &checkpoint);
+    return write_record(ctx, ctx->fd, &checkpoint);
 }
 
 /*
@@ -646,7 +646,7 @@ static int verify_frames(struct xc_sr_context *ctx)
 
 DPRINTF("Enabling verify mode");
 
-    rc = write_record(ctx, &rec);
+    rc = write_record(ctx, ctx->fd, &rec);
 if ( rc )
 goto out;
 
diff --git a/tools/libxc/xc_sr_save_x86_hvm.c b/tools/libxc/xc_sr_save_x86_hvm.c
index fc5c6ea..54ddbfe 100644
--- a/tools/libxc/xc_sr_save_x86_hvm.c
+++ b/tools/libxc/xc_sr_save_x86_hvm.c
@@ -42,7 +42,7 @@ static int write_hvm_context(struct xc_sr_context *ctx)
 }
 
 hvm_rec.length = hvm_buf_size;
-    rc = write_record(ctx, &hvm_rec);
+    rc = write_record(ctx, ctx->fd, &hvm_rec);
 if ( rc < 0 )
 {
 PERROR("error write HVM_CONTEXT record");
@@ -116,7 +116,8 @@ static int write_hvm_params(struct xc_sr_context *ctx)
 if ( hdr.count == 0 )
 return 0;
 
-    rc = write_split_record(ctx, &hdr, entries, hdr.count * sizeof(*entries));
+    rc = write_split_record(ctx, ctx->fd, &hdr, entries,
+                            hdr.count * sizeof(*entries));
 if ( rc )
 PERROR("Failed to write HVM_PARAMS record");
 
diff --git a/tools/libxc/xc_sr_save_x86_pv.c b/tools/libxc/xc_sr_save_x86_pv.c
index 36b1058..5f9b97d 100644
--- a/tools/libxc/xc_sr_save_x86_pv.c
+++ 

[Xen-devel] [PATCH RFC v2 07/23] libxc/migration: tidy the xc_domain_save()/restore() interface

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

Both xc_domain_save() and xc_domain_restore() have a high number of
parameters, including a number of boolean parameters that are split
between a bitfield flags argument and separate individual boolean
arguments.  Further, many of these arguments are dead/ignored.

Tidy the interface to these functions by collecting the parameters into
a structure assembled by the caller and passed by pointer, and drop the
dead parameters.

No functional change.

Signed-off-by: Joshua Otto 
---
 tools/libxc/include/xenguest.h   | 68 
 tools/libxc/xc_nomigrate.c   | 16 +++---
 tools/libxc/xc_sr_common.h   |  4 +--
 tools/libxc/xc_sr_restore.c  | 47 +--
 tools/libxc/xc_sr_save.c | 54 +++
 tools/libxl/libxl_dom_save.c | 11 ---
 tools/libxl/libxl_internal.h |  1 -
 tools/libxl/libxl_save_callout.c | 12 +++
 tools/libxl/libxl_save_helper.c  | 61 ---
 9 files changed, 122 insertions(+), 152 deletions(-)

diff --git a/tools/libxc/include/xenguest.h b/tools/libxc/include/xenguest.h
index aa8cc8b..d1f97b9 100644
--- a/tools/libxc/include/xenguest.h
+++ b/tools/libxc/include/xenguest.h
@@ -22,16 +22,9 @@
 #ifndef XENGUEST_H
 #define XENGUEST_H
 
-#define XC_NUMA_NO_NODE   (~0U)
-
-#define XCFLAGS_LIVE  (1 << 0)
-#define XCFLAGS_DEBUG (1 << 1)
-#define XCFLAGS_HVM   (1 << 2)
-#define XCFLAGS_STDVGA(1 << 3)
-#define XCFLAGS_CHECKPOINT_COMPRESS(1 << 4)
+#include 
 
-#define X86_64_B_SIZE   64 
-#define X86_32_B_SIZE   32
+#define XC_NUMA_NO_NODE   (~0U)
 
 /*
  * User not using xc_suspend_* / xc_await_suspent may not want to
@@ -90,20 +83,26 @@ typedef enum {
 XC_MIG_STREAM_COLO,
 } xc_migration_stream_t;
 
+struct domain_save_params {
+uint32_t dom;   /* the id of the domain */
+int save_fd;/* the fd to save the domain to */
+int recv_fd;/* the fd to receive live protocol responses */
+uint32_t max_iters; /* how many precopy iterations before we give up? */
+bool live;  /* is this a live migration? */
+bool debug; /* are we in debug mode? */
+xc_migration_stream_t stream_type; /* is there checkpointing involved? */
+};
+
 /**
  * This function will save a running domain.
  *
  * @parm xch a handle to an open hypervisor interface
- * @parm fd the file descriptor to save a domain to
- * @parm dom the id of the domain
- * @param stream_type XC_MIG_STREAM_NONE if the far end of the stream
- *doesn't use checkpointing
+ * @parm params a description of the requested save/migration
+ * @parm callbacks hooks for delegated steps of the save procedure
  * @return 0 on success, -1 on failure
  */
-int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iters,
-   uint32_t max_factor, uint32_t flags /* XCFLAGS_xxx */,
-   struct save_callbacks* callbacks, int hvm,
-   xc_migration_stream_t stream_type, int recv_fd);
+int xc_domain_save(xc_interface *xch, const struct domain_save_params *params,
+   const struct save_callbacks *callbacks);
 
 /* callbacks provided by xc_domain_restore */
 struct restore_callbacks {
@@ -145,31 +144,32 @@ struct restore_callbacks {
 void* data;
 };
 
+struct domain_restore_params {
+uint32_t dom; /* the id of the domain */
+int recv_fd;  /* the fd to restore the domain from */
+int send_back_fd; /* the fd to send live protocol responses */
+unsigned int store_evtchn;/* the store event channel */
+xen_pfn_t *store_gfn; /* OUT - the gfn of the store page */
+domid_t store_domid;  /* the store domain id */
+unsigned int console_evtchn;  /* the console event channel */
+xen_pfn_t *console_gfn;   /* OUT - the gfn of the console page */
+domid_t console_domid;/* the console domain id */
+xc_migration_stream_t stream_type; /* is there checkpointing involved? */
+};
+
 /**
  * This function will restore a saved domain.
  *
  * Domain is restored in a suspended state ready to be unpaused.
  *
  * @parm xch a handle to an open hypervisor interface
- * @parm fd the file descriptor to restore a domain from
- * @parm dom the id of the domain
- * @parm store_evtchn the store event channel for this domain to use
- * @parm store_mfn returned with the mfn of the store page
- * @parm hvm non-zero if this is a HVM restore
- * @parm pae non-zero if this HVM domain has PAE support enabled
- * @parm superpages non-zero to allocate guest memory with superpages
- * @parm stream_type non-zero if the far end of the stream is using checkpointing
- * @parm callbacks non-NULL to receive a callback to restore toolstack
- *   specific data
+ * @parm params a description of the requested restore operation
+ * @parm callbacks hooks for delegated steps of the restore 

[Xen-devel] [PATCH RFC v2 20/23] tools: expose postcopy live migration support in libxl and xl

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

- Add a 'memory_strategy' parameter to libxl_domain_live_migrate(),
  which specifies how the remainder of the memory migration should be
  approached after the iterative precopy phase is completed.
- Plug this parameter into the libxl migration precopy policy
  implementation.
- Add --postcopy to xl migrate, and skip the xl-level handshaking at
  both sides when postcopy migration occurs.

Signed-off-by: Joshua Otto 
---
 tools/libxl/libxl.h  |  5 
 tools/libxl/libxl_dom_save.c | 17 
 tools/libxl/libxl_domain.c   |  8 --
 tools/libxl/libxl_internal.h |  1 +
 tools/xl/xl.h|  7 -
 tools/xl/xl_cmdtable.c   |  3 ++
 tools/xl/xl_migrate.c| 65 
 tools/xl/xl_vmcontrol.c  |  8 --
 8 files changed, 97 insertions(+), 17 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 70441cf..b569734 100644
--- a/tools/libxl/libxl.h
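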
+++ b/tools/libxl/libxl.h
@@ -1413,9 +1413,14 @@ int libxl_domain_live_migrate(libxl_ctx *ctx, uint32_t 
domid, int send_fd,
   int flags, /* LIBXL_SUSPEND_* */
   int recv_fd,
   bool *postcopy_transitioned, /* OUT */
+  int memory_strategy,
   const libxl_asyncop_how *ao_how)
   LIBXL_EXTERNAL_CALLERS_ONLY;
 
+#define LIBXL_LM_MEMORY_STOP_AND_COPY 0
+#define LIBXL_LM_MEMORY_POSTCOPY 1
+#define LIBXL_LM_MEMORY_DEFAULT LIBXL_LM_MEMORY_STOP_AND_COPY
+
 /* @param suspend_cancel [from xenctrl.h:xc_domain_resume( @param fast )]
  *   If this parameter is true, use co-operative resume. The guest
  *   must support this.
diff --git a/tools/libxl/libxl_dom_save.c b/tools/libxl/libxl_dom_save.c
index 75ab523..c54f728 100644
--- a/tools/libxl/libxl_dom_save.c
+++ b/tools/libxl/libxl_dom_save.c
@@ -338,14 +338,19 @@ int 
libxl__save_emulator_xenstore_data(libxl__domain_save_state *dss,
  * the live migration when there are either fewer than 50 dirty pages, or more
  * than 5 precopy rounds have completed.
  */
-static int libxl__save_live_migration_precopy_policy(
-struct precopy_stats stats, void *user)
+static int libxl__save_live_migration_precopy_policy(struct precopy_stats 
stats,
+ void *user)
 {
-if (stats.dirty_count >= 0 && stats.dirty_count < 50)
-return XGS_POLICY_STOP_AND_COPY;
+libxl__save_helper_state *shs = user;
+libxl__domain_save_state *dss = shs->caller_state;
 
-if (stats.iteration >= 5)
-return XGS_POLICY_STOP_AND_COPY;
+if ((stats.dirty_count >= 0 &&
+ stats.dirty_count < 50) ||
+(stats.iteration >= 5)) {
+return (dss->memory_strategy == LIBXL_LM_MEMORY_POSTCOPY)
+? XGS_POLICY_POSTCOPY
+: XGS_POLICY_STOP_AND_COPY;
+}
 
 return XGS_POLICY_CONTINUE_PRECOPY;
 }
diff --git a/tools/libxl/libxl_domain.c b/tools/libxl/libxl_domain.c
index fc37f47..e211b88 100644
--- a/tools/libxl/libxl_domain.c
+++ b/tools/libxl/libxl_domain.c
@@ -488,6 +488,7 @@ static void domain_suspend_cb(libxl__egc *egc,
 
 static int do_domain_suspend(libxl_ctx *ctx, uint32_t domid, int fd, int flags,
  int recv_fd, bool *postcopy_transitioned,
+ int memory_strategy,
  const libxl_asyncop_how *ao_how)
 {
 AO_CREATE(ctx, domid, ao_how);
@@ -509,6 +510,7 @@ static int do_domain_suspend(libxl_ctx *ctx, uint32_t 
domid, int fd, int flags,
 dss->fd = fd;
 dss->recv_fd = recv_fd;
 dss->postcopy_transitioned = postcopy_transitioned;
+dss->memory_strategy = memory_strategy;
 dss->type = type;
 dss->live = flags & LIBXL_SUSPEND_LIVE;
 dss->debug = flags & LIBXL_SUSPEND_DEBUG;
@@ -529,12 +531,14 @@ static int do_domain_suspend(libxl_ctx *ctx, uint32_t 
domid, int fd, int flags,
 int libxl_domain_suspend(libxl_ctx *ctx, uint32_t domid, int fd, int flags,
  const libxl_asyncop_how *ao_how)
 {
-return do_domain_suspend(ctx, domid, fd, flags, -1, NULL, ao_how);
+return do_domain_suspend(ctx, domid, fd, flags, -1, NULL,
+ LIBXL_LM_MEMORY_DEFAULT, ao_how);
 }
 
 int libxl_domain_live_migrate(libxl_ctx *ctx, uint32_t domid, int send_fd,
   int flags, int recv_fd,
   bool *postcopy_transitioned,
+  int memory_strategy,
   const libxl_asyncop_how *ao_how)
 {
 if (!postcopy_transitioned) {
@@ -545,7 +549,7 @@ int libxl_domain_live_migrate(libxl_ctx *ctx, uint32_t 
domid, int send_fd,
 flags |= LIBXL_SUSPEND_LIVE;
 
 return do_domain_suspend(ctx, domid, send_fd, flags, recv_fd,
- postcopy_transitioned, ao_how);
+ postcopy_transitioned, memory_strategy, ao_how);

[Xen-devel] [PATCH RFC v2 03/23] libxc/xc_sr_restore.c: use write_record() in send_checkpoint_dirty_pfn_list()

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

Teach send_checkpoint_dirty_pfn_list() to use write_record()'s new fd
parameter, avoiding the need for a manual writev().

No functional change.

Signed-off-by: Joshua Otto 
Acked-by: Wei Liu 
Reviewed-by: Andrew Cooper 
---
 tools/libxc/xc_sr_restore.c | 27 ---
 1 file changed, 4 insertions(+), 23 deletions(-)

diff --git a/tools/libxc/xc_sr_restore.c b/tools/libxc/xc_sr_restore.c
index ee06b3d..481a904 100644
--- a/tools/libxc/xc_sr_restore.c
+++ b/tools/libxc/xc_sr_restore.c
@@ -420,7 +420,6 @@ static int send_checkpoint_dirty_pfn_list(struct 
xc_sr_context *ctx)
 int rc = -1;
 unsigned count, written;
 uint64_t i, *pfns = NULL;
-struct iovec *iov = NULL;
 xc_shadow_op_stats_t stats = { 0, ctx->restore.p2m_size };
 struct xc_sr_record rec =
 {
@@ -467,35 +466,17 @@ static int send_checkpoint_dirty_pfn_list(struct 
xc_sr_context *ctx)
 pfns[written++] = i;
 }
 
-/* iovec[] for writev(). */
-iov = malloc(3 * sizeof(*iov));
-if ( !iov )
-{
-ERROR("Unable to allocate memory for sending dirty bitmap");
-goto err;
-}
-
+rec.data = pfns;
 rec.length = count * sizeof(*pfns);
 
-iov[0].iov_base = &rec.type;
-iov[0].iov_len = sizeof(rec.type);
-
-iov[1].iov_base = &rec.length;
-iov[1].iov_len = sizeof(rec.length);
-
-iov[2].iov_base = pfns;
-iov[2].iov_len = count * sizeof(*pfns);
-
-if ( writev_exact(ctx->restore.send_back_fd, iov, 3) )
-{
-PERROR("Failed to write dirty bitmap to stream");
+rc = write_record(ctx, ctx->restore.send_back_fd, &rec);
+if ( rc )
 goto err;
-}
 
 rc = 0;
+
  err:
 free(pfns);
-free(iov);
 return rc;
 }
 
-- 
2.7.4



[Xen-devel] [PATCH RFC v2 10/23] libxc/xc_sr_save: introduce save batch types

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

To write guest pages into the stream, the save logic builds up batches
of pfns to be written and performs all of the work necessary to write
them whenever a full batch has been accumulated.  Writing a PAGE_DATA
batch entails determining the types of all pfns in the batch, mapping
the subset of pfns that are backed by real memory, constructing a
PAGE_DATA record describing the batch, and writing everything into the
stream.

Postcopy live migration introduces several new types of batches.  To
enable the postcopy logic to re-use the bulk of the code used to manage
and write PAGE_DATA records, introduce a batch_type member to the save
context (which for now can take on only a single value), and refactor
write_batch() to take the batch_type into account when preparing and
writing each record.

While refactoring write_batch(), factor the operation of querying the
page types of a batch into a subroutine that is usable independently of
write_batch().

No functional change.

Signed-off-by: Joshua Otto 
---
 tools/libxc/xc_sr_common.h|   3 +
 tools/libxc/xc_sr_save.c  | 207 +++---
 tools/libxc/xg_save_restore.h |   2 +-
 3 files changed, 140 insertions(+), 72 deletions(-)

diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
index 0da0ffc..fc82e71 100644
--- a/tools/libxc/xc_sr_common.h
+++ b/tools/libxc/xc_sr_common.h
@@ -208,6 +208,9 @@ struct xc_sr_context
 struct precopy_stats stats;
 int policy_decision;
 
+enum {
+XC_SR_SAVE_BATCH_PRECOPY_PAGE
+} batch_type;
 xen_pfn_t *batch_pfns;
 unsigned nr_batch_pfns;
 unsigned long *deferred_pages;
diff --git a/tools/libxc/xc_sr_save.c b/tools/libxc/xc_sr_save.c
index 48d403b..9f077a3 100644
--- a/tools/libxc/xc_sr_save.c
+++ b/tools/libxc/xc_sr_save.c
@@ -3,6 +3,23 @@
 
 #include "xc_sr_common.h"
 
+#define MAX_BATCH_SIZE MAX_PRECOPY_BATCH_SIZE
+
+static const unsigned int batch_sizes[] =
+{
+[XC_SR_SAVE_BATCH_PRECOPY_PAGE]  = MAX_PRECOPY_BATCH_SIZE
+};
+
+static const bool batch_includes_contents[] =
+{
+[XC_SR_SAVE_BATCH_PRECOPY_PAGE] = true
+};
+
+static const uint32_t batch_rec_types[] =
+{
+[XC_SR_SAVE_BATCH_PRECOPY_PAGE]  = REC_TYPE_PAGE_DATA
+};
+
 /*
  * Writes an Image header and Domain header into the stream.
  */
@@ -67,19 +84,54 @@ static int write_checkpoint_record(struct xc_sr_context 
*ctx)
 }
 
 /*
+ * This function:
+ * - maps each pfn in the current batch to its gfn
+ * - gets the type of each pfn in the batch.
+ */
+static int get_batch_info(struct xc_sr_context *ctx, xen_pfn_t *gfns,
+  xen_pfn_t *types)
+{
+int rc;
+unsigned int nr_pfns = ctx->save.nr_batch_pfns;
+xc_interface *xch = ctx->xch;
+unsigned int i;
+
+for ( i = 0; i < nr_pfns; ++i )
+types[i] = gfns[i] = ctx->save.ops.pfn_to_gfn(ctx,
+  ctx->save.batch_pfns[i]);
+
+/*
+ * The type query domctl accepts batches of at most 1024 pfns, so we need 
to
+ * break our batch here into appropriately-sized sub-batches.
+ */
+for ( i = 0; i < nr_pfns; i += 1024 )
+{
+rc = xc_get_pfn_type_batch(xch, ctx->domid, min(1024U, nr_pfns - i),
+   &types[i]);
+if ( rc )
+{
+PERROR("Failed to get types for pfn batch");
+return rc;
+}
+}
+
+return 0;
+}
+
+/*
  * Writes a batch of memory as a PAGE_DATA record into the stream.  The batch
  * is constructed in ctx->save.batch_pfns.
  *
  * This function:
- * - gets the types for each pfn in the batch.
  * - for each pfn with real data:
  *   - maps and attempts to localise the pages.
  * - construct and writes a PAGE_DATA record into the stream.
  */
-static int write_batch(struct xc_sr_context *ctx)
+static int write_batch(struct xc_sr_context *ctx, xen_pfn_t *gfns,
+   xen_pfn_t *types)
 {
 xc_interface *xch = ctx->xch;
-xen_pfn_t *gfns = NULL, *types = NULL;
+xen_pfn_t *bgfns = NULL;
 void *guest_mapping = NULL;
 void **guest_data = NULL;
 void **local_pages = NULL;
@@ -90,17 +142,16 @@ static int write_batch(struct xc_sr_context *ctx)
 uint64_t *rec_pfns = NULL;
 struct iovec *iov = NULL; int iovcnt = 0;
 struct xc_sr_rec_pages_header hdr = { 0 };
+bool send_page_contents = batch_includes_contents[ctx->save.batch_type];
 struct xc_sr_record rec =
 {
-.type = REC_TYPE_PAGE_DATA,
+.type = batch_rec_types[ctx->save.batch_type],
 };
 
 assert(nr_pfns != 0);
 
-/* Mfns of the batch pfns. */
-gfns = malloc(nr_pfns * sizeof(*gfns));
-/* Types of the batch pfns. */
-types = malloc(nr_pfns * sizeof(*types));
+/* The subset of gfns that are physically-backed. */
+bgfns = malloc(nr_pfns * sizeof(*bgfns));
/* Errors from attempting to map the gfns. */

[Xen-devel] [PATCH RFC v2 04/23] libxc/xc_sr: naming correction: mfns -> gfns

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

In write_batch() on the migration save side and in process_page_data()
on the corresponding path on the restore side, a local variable named
'mfns' is used to refer to an array of what are actually gfns.  Rename
both to 'gfns' to address this.

No functional change.

Signed-off-by: Joshua Otto 
Suggested-by: Andrew Cooper 
---
 tools/libxc/xc_sr_restore.c | 16 
 tools/libxc/xc_sr_save.c| 20 ++--
 2 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/tools/libxc/xc_sr_restore.c b/tools/libxc/xc_sr_restore.c
index 481a904..2f35f4d 100644
--- a/tools/libxc/xc_sr_restore.c
+++ b/tools/libxc/xc_sr_restore.c
@@ -203,7 +203,7 @@ static int process_page_data(struct xc_sr_context *ctx, 
unsigned count,
  xen_pfn_t *pfns, uint32_t *types, void *page_data)
 {
 xc_interface *xch = ctx->xch;
-xen_pfn_t *mfns = malloc(count * sizeof(*mfns));
+xen_pfn_t *gfns = malloc(count * sizeof(*gfns));
 int *map_errs = malloc(count * sizeof(*map_errs));
 int rc;
 void *mapping = NULL, *guest_page = NULL;
@@ -211,11 +211,11 @@ static int process_page_data(struct xc_sr_context *ctx, 
unsigned count,
 j, /* j indexes the subset of pfns we decide to map. */
 nr_pages = 0;
 
-if ( !mfns || !map_errs )
+if ( !gfns || !map_errs )
 {
 rc = -1;
 ERROR("Failed to allocate %zu bytes to process page data",
-  count * (sizeof(*mfns) + sizeof(*map_errs)));
+  count * (sizeof(*gfns) + sizeof(*map_errs)));
 goto err;
 }
 
@@ -246,7 +246,7 @@ static int process_page_data(struct xc_sr_context *ctx, 
unsigned count,
 case XEN_DOMCTL_PFINFO_L4TAB:
 case XEN_DOMCTL_PFINFO_L4TAB | XEN_DOMCTL_PFINFO_LPINTAB:
 
-mfns[nr_pages++] = ctx->restore.ops.pfn_to_gfn(ctx, pfns[i]);
+gfns[nr_pages++] = ctx->restore.ops.pfn_to_gfn(ctx, pfns[i]);
 break;
 }
 }
@@ -257,11 +257,11 @@ static int process_page_data(struct xc_sr_context *ctx, 
unsigned count,
 
 mapping = guest_page = xenforeignmemory_map(xch->fmem,
 ctx->domid, PROT_READ | PROT_WRITE,
-nr_pages, mfns, map_errs);
+nr_pages, gfns, map_errs);
 if ( !mapping )
 {
 rc = -1;
-PERROR("Unable to map %u mfns for %u pages of data",
+PERROR("Unable to map %u gfns for %u pages of data",
nr_pages, count);
 goto err;
 }
@@ -281,7 +281,7 @@ static int process_page_data(struct xc_sr_context *ctx, 
unsigned count,
 {
 rc = -1;
 ERROR("Mapping pfn %#"PRIpfn" (mfn %#"PRIpfn", type %#"PRIx32") 
failed with %d",
-  pfns[i], mfns[j], types[i], map_errs[j]);
+  pfns[i], gfns[j], types[i], map_errs[j]);
 goto err;
 }
 
@@ -320,7 +320,7 @@ static int process_page_data(struct xc_sr_context *ctx, 
unsigned count,
 xenforeignmemory_unmap(xch->fmem, mapping, nr_pages);
 
 free(map_errs);
-free(mfns);
+free(gfns);
 
 return rc;
 }
diff --git a/tools/libxc/xc_sr_save.c b/tools/libxc/xc_sr_save.c
index 8aba0d8..e93d8fd 100644
--- a/tools/libxc/xc_sr_save.c
+++ b/tools/libxc/xc_sr_save.c
@@ -79,7 +79,7 @@ static int write_checkpoint_record(struct xc_sr_context *ctx)
 static int write_batch(struct xc_sr_context *ctx)
 {
 xc_interface *xch = ctx->xch;
-xen_pfn_t *mfns = NULL, *types = NULL;
+xen_pfn_t *gfns = NULL, *types = NULL;
 void *guest_mapping = NULL;
 void **guest_data = NULL;
 void **local_pages = NULL;
@@ -98,7 +98,7 @@ static int write_batch(struct xc_sr_context *ctx)
 assert(nr_pfns != 0);
 
 /* Mfns of the batch pfns. */
-mfns = malloc(nr_pfns * sizeof(*mfns));
+gfns = malloc(nr_pfns * sizeof(*gfns));
 /* Types of the batch pfns. */
 types = malloc(nr_pfns * sizeof(*types));
 /* Errors from attempting to map the gfns. */
@@ -110,7 +110,7 @@ static int write_batch(struct xc_sr_context *ctx)
 /* iovec[] for writev(). */
 iov = malloc((nr_pfns + 4) * sizeof(*iov));
 
-if ( !mfns || !types || !errors || !guest_data || !local_pages || !iov )
+if ( !gfns || !types || !errors || !guest_data || !local_pages || !iov )
 {
 ERROR("Unable to allocate arrays for a batch of %u pages",
   nr_pfns);
@@ -119,11 +119,11 @@ static int write_batch(struct xc_sr_context *ctx)
 
 for ( i = 0; i < nr_pfns; ++i )
 {
-types[i] = mfns[i] = ctx->save.ops.pfn_to_gfn(ctx,
+types[i] = gfns[i] = ctx->save.ops.pfn_to_gfn(ctx,
   ctx->save.batch_pfns[i]);
 
 /* Likely a ballooned page. */
-if ( mfns[i] == INVALID_MFN )
+if ( gfns[i] == INVALID_MFN )
 {
 set_bit(ctx->save.batch_pfns[i], ctx->save.deferred_pages);
 ++ctx->save.nr_deferred_pages;
@@ -148,13 +148,13 @@ static int 

[Xen-devel] [PATCH RFC v2 12/23] libxc/migration: specify postcopy live migration

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

- allocate the new postcopy record type numbers
- augment the stream format specification to include these new types and
  their role in the protocol

Signed-off-by: Joshua Otto 
---
 docs/specs/libxc-migration-stream.pandoc | 175 +++
 tools/libxc/xc_sr_common.c   |   7 ++
 tools/libxc/xc_sr_stream_format.h|   9 +-
 3 files changed, 190 insertions(+), 1 deletion(-)

diff --git a/docs/specs/libxc-migration-stream.pandoc 
b/docs/specs/libxc-migration-stream.pandoc
index 8342d88..9f08615 100644
--- a/docs/specs/libxc-migration-stream.pandoc
+++ b/docs/specs/libxc-migration-stream.pandoc
@@ -3,6 +3,7 @@
   Andrew Cooper <>
   Wen Congyang <>
   Yang Hongyang <>
+  Joshua Otto <>
 % Revision 2
 
 Introduction
type 0x00000000: END

 0x0000000F: CHECKPOINT_DIRTY_PFN_LIST (Secondary -> Primary)

+ 0x00000010: POSTCOPY_BEGIN
+
+ 0x00000011: POSTCOPY_PFNS_BEGIN
+
+ 0x00000012: POSTCOPY_PFNS
+
+ 0x00000013: POSTCOPY_TRANSITION
+
+ 0x00000014: POSTCOPY_PAGE_DATA
+
+ 0x00000015: POSTCOPY_FAULT
+
+ 0x00000016: POSTCOPY_COMPLETE
+
 0x00000010 - 0x7FFFFFFF: Reserved for future _mandatory_
 records.
 
@@ -624,6 +639,142 @@ The count of pfns is: record->length/sizeof(uint64_t).
 
 \clearpage
 
+POSTCOPY_BEGIN
+--
+
+This record must only appear in a truly _live_ migration stream, and is
+transmitted by the migration sender to signal to the destination that
+the migration will (as soon as possible) transition from the memory
+pre-copy phase to the post-copy phase, during which remaining unmigrated
+domain memory is paged over the network on-demand _after_ the guest has
+resumed.
+
+This record _must_ be followed immediately by the domain CPU context
+records (e.g. TSC_INFO, HVM_CONTEXT and HVM_PARAMS for HVM domains).
+This is for practical reasons: in the HVM case, the PAGING_RING_PFN
+parameter must be known at the destination before preparation for paging
+can begin.
+
+This record contains no fields; its body_length is 0.
+
+\clearpage
+
+POSTCOPY_PFNS_BEGIN
+---
+
+During the initiation sequence of a postcopy live migration, this record
+immediately follows the final domain CPU context record and indicates
+the beginning of a sequence of 0 or more POSTCOPY_PFNS records.  The
+destination uses this record as a cue to prepare for postcopy paging.
+
+This record contains no fields; its body_length is 0.
+
+\clearpage
+
+POSTCOPY_PFNS
+-
+
+Each POSTCOPY_PFNS record contains an unordered list of 'postcopy pfns'
+- i.e. pfns that are dirty at the sender and require migration during
+the postcopy phase.  The structure of the record is identical to that of
+the PAGE_DATA record type, but omitting any actual trailing page
+contents.
+
+ 0     1     2     3     4     5     6     7 octet
++-----------------------+-------------------------+
+| count (C)             | (reserved)              |
++-----------------------+-------------------------+
+| pfn[0]                                          |
++-------------------------------------------------+
+...
++-------------------------------------------------+
+| pfn[C-1]                                        |
++-------------------------------------------------+
+
+\clearpage
+
+POSTCOPY_TRANSITION
+---
+
+This record is transmitted by a postcopy live migration sender after the
+final POSTCOPY_PFNS record, and indicates that the embedded libxc stream
+will be interrupted by content in the higher-layer stream necessary to
+permit resumption of the domain at the destination, and further that,
+when the higher-layer content is complete, the domain should be resumed
+in postcopy mode at the destination.
+
+This record contains no fields; its body_length is 0.
+
+\clearpage
+
+POSTCOPY_PAGE_DATA
+--
+
+This record is identical in meaning and format to the PAGE_DATA record
+type, and is transmitted during live migration by the sender during the
+postcopy phase to transfer batches of outstanding domain memory.
+
+ 0     1     2     3     4     5     6     7 octet
++-----------------------+-------------------------+
+| count (C)             | (reserved)              |
++-----------------------+-------------------------+
+| pfn[0]                                          |
++-------------------------------------------------+
+...
++-------------------------------------------------+
+| pfn[C-1]                                        |
++-------------------------------------------------+
+| page_data[0]...                                 |
+...
++-------------------------------------------------+
+| page_data[C-1]...                               |
+...
++-------------------------------------------------+
+

[Xen-devel] [PATCH RFC v2 08/23] libxc/migration: defer precopy policy to a callback

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

The precopy phase of the xc_domain_save() live migration algorithm has
historically been implemented to run until either a) (almost) no pages
are dirty or b) some fixed, hard-coded maximum number of precopy
iterations has been exceeded.  This policy and its implementation are
less than ideal for a few reasons:
- the logic of the policy is intertwined with the control flow of the
  mechanism of the precopy stage
- it can't take into account facts external to the immediate
  migration context, such as interactive user input or the passage of
  wall-clock time

To permit users to implement arbitrary higher-level policies governing
when the live migration precopy phase should end, and what should be
done next:
- add a precopy_policy() callback to the xc_domain_save() user-supplied
  callbacks
- during the precopy phase of live migrations, consult this policy after
  each batch of pages transmitted and take the dictated action, which
  may be to a) abort the migration entirely, b) continue with the
  precopy, or c) proceed to the stop-and-copy phase.

For now a simple callback implementing the old policy is hard-coded in
place (to be replaced in a subsequent patch).

Signed-off-by: Joshua Otto 
---
 tools/libxc/include/xenguest.h   |  20 +++-
 tools/libxc/xc_sr_common.h   |  12 ++-
 tools/libxc/xc_sr_save.c | 193 ---
 tools/libxl/libxl_save_callout.c |   2 +-
 tools/libxl/libxl_save_helper.c  |   1 -
 5 files changed, 170 insertions(+), 58 deletions(-)

diff --git a/tools/libxc/include/xenguest.h b/tools/libxc/include/xenguest.h
index d1f97b9..215abd0 100644
--- a/tools/libxc/include/xenguest.h
+++ b/tools/libxc/include/xenguest.h
@@ -32,6 +32,14 @@
  */
 struct xenevtchn_handle;
 
+/* For save's precopy_policy(). */
+struct precopy_stats
+{
+unsigned int iteration;
+unsigned int total_written;
+int dirty_count; /* -1 if unknown */
+};
+
 /* callbacks provided by xc_domain_save */
 struct save_callbacks {
 /* Called after expiration of checkpoint interval,
@@ -39,6 +47,17 @@ struct save_callbacks {
  */
 int (*suspend)(void* data);
 
+/* Called after every batch of page data sent during the precopy phase of a
+ * live migration to ask the caller what to do next based on the current
+ * state of the precopy migration.
+ */
+#define XGS_POLICY_ABORT  (-1) /* Abandon the migration entirely and
+* tidy up. */
+#define XGS_POLICY_CONTINUE_PRECOPY 0  /* Remain in the precopy phase. */
+#define XGS_POLICY_STOP_AND_COPY1  /* Immediately suspend and transmit the
+* remaining dirty pages. */
+int (*precopy_policy)(struct precopy_stats stats, void *data);
+
 /* Called after the guest's dirty pages have been
  *  copied into an output buffer.
  * Callback function resumes the guest & the device model,
@@ -87,7 +106,6 @@ struct domain_save_params {
 uint32_t dom;   /* the id of the domain */
 int save_fd;/* the fd to save the domain to */
 int recv_fd;/* the fd to receive live protocol responses */
-uint32_t max_iters; /* how many precopy iterations before we give up? */
 bool live;  /* is this a live migration? */
 bool debug; /* are we in debug mode? */
 xc_migration_stream_t stream_type; /* is there checkpointing involved? */
diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
index f192654..0da0ffc 100644
--- a/tools/libxc/xc_sr_common.h
+++ b/tools/libxc/xc_sr_common.h
@@ -198,12 +198,16 @@ struct xc_sr_context
 /* Further debugging information in the stream. */
 bool debug;
 
-/* Parameters for tweaking live migration. */
-unsigned max_iterations;
-unsigned dirty_threshold;
-
 unsigned long p2m_size;
 
+enum {
+XC_SAVE_PHASE_PRECOPY,
+XC_SAVE_PHASE_STOP_AND_COPY
+} phase;
+
+struct precopy_stats stats;
+int policy_decision;
+
 xen_pfn_t *batch_pfns;
 unsigned nr_batch_pfns;
 unsigned long *deferred_pages;
diff --git a/tools/libxc/xc_sr_save.c b/tools/libxc/xc_sr_save.c
index 0ab86c3..55b77ff 100644
--- a/tools/libxc/xc_sr_save.c
+++ b/tools/libxc/xc_sr_save.c
@@ -277,13 +277,29 @@ static int write_batch(struct xc_sr_context *ctx)
 }
 
 /*
+ * Test if the batch is full.
+ */
+static bool batch_full(const struct xc_sr_context *ctx)
+{
+return ctx->save.nr_batch_pfns == MAX_BATCH_SIZE;
+}
+
+/*
+ * Test if the batch is empty.
+ */
+static bool batch_empty(struct xc_sr_context *ctx)
+{
+return ctx->save.nr_batch_pfns == 0;
+}
+
+/*
  * Flush a batch of pfns into the stream.
  */
 static int flush_batch(struct xc_sr_context *ctx)
 {
 int rc = 0;
 
-if ( ctx->save.nr_batch_pfns == 0 )
+if ( batch_empty(ctx) )
return rc;

[Xen-devel] [PATCH RFC v2 22/23] xen/mem_paging: add a populate_evicted paging op

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

The paging API presently permits only individual, populated pages to be
evicted, and even then only after a previous nomination op on the
candidate page.  This works well at steady-state, but is somewhat
awkward and inefficient for pagers attempting to implement startup
demand-paging for guests: in this case it is necessary to populate all
of the holes in the physmap to be demand-paged, only to then nominate
and immediately evict each page one-by-one.

To permit more efficient startup demand-paging, introduce a new
populate_evicted paging op.  Given a batch of gfns, it:
- marks gfns corresponding to physmap holes as paged-out directly
- frees the backing frames of previously-populated gfns, and then marks
  them as paged-out directly (skipping the nomination step)

The latter behaviour is needed to fully support postcopy live migration:
a page may be populated only to have its contents subsequently
invalidated by a write at the sender, requiring it to ultimately be
demand-paged anyway.

I measured a reduction in time required to evict a batch of 512k
previously-unpopulated pfns from 8.535s to 1.590s (~5.4x speedup).

Note: as a long-running batching memory op, populate_evicted takes
advantage of the existing pre-emption/continuation hack (encoding the
starting offset into the batch in bits [:6] of the op argument).  To
make this work, plumb the cmd argument all the way down through
do_memory_op() -> arch_memory_op() -> subarch_memory_op() ->
mem_paging_memop(), fixing up each switch statement along the way to
use only the MEMOP_CMD bits.

Signed-off-by: Joshua Otto 
---
 tools/libxc/include/xenctrl.h|   2 +
 tools/libxc/xc_mem_paging.c  |  31 
 xen/arch/x86/mm.c|   5 +-
 xen/arch/x86/mm/mem_paging.c |  34 -
 xen/arch/x86/mm/p2m.c| 101 +++
 xen/arch/x86/x86_64/compat/mm.c  |   6 ++-
 xen/arch/x86/x86_64/mm.c |   6 ++-
 xen/include/asm-x86/mem_paging.h |   3 +-
 xen/include/asm-x86/p2m.h|   2 +
 xen/include/public/memory.h  |  13 +++--
 10 files changed, 190 insertions(+), 13 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 1629f41..22992b9 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1945,6 +1945,8 @@ int xc_mem_paging_resume(xc_interface *xch, domid_t 
domain_id);
 int xc_mem_paging_nominate(xc_interface *xch, domid_t domain_id,
uint64_t gfn);
 int xc_mem_paging_evict(xc_interface *xch, domid_t domain_id, uint64_t gfn);
+int xc_mem_paging_populate_evicted(xc_interface *xch, domid_t domain_id,
+   xen_pfn_t *gfns, uint32_t nr);
 int xc_mem_paging_prep(xc_interface *xch, domid_t domain_id, uint64_t gfn);
 int xc_mem_paging_load(xc_interface *xch, domid_t domain_id,
uint64_t gfn, void *buffer);
diff --git a/tools/libxc/xc_mem_paging.c b/tools/libxc/xc_mem_paging.c
index f314b08..b0416b6 100644
--- a/tools/libxc/xc_mem_paging.c
+++ b/tools/libxc/xc_mem_paging.c
@@ -116,6 +116,37 @@ int xc_mem_paging_load(xc_interface *xch, domid_t 
domain_id,
 return rc;
 }
 
+int xc_mem_paging_populate_evicted(xc_interface *xch,
+   domid_t domain_id,
+   xen_pfn_t *gfns,
+   uint32_t nr)
+{
+DECLARE_HYPERCALL_BOUNCE(gfns, nr * sizeof(*gfns),
+ XC_HYPERCALL_BUFFER_BOUNCE_IN);
+int rc;
+
+xen_mem_paging_op_t mpo =
+{
+.op   = XENMEM_paging_op_populate_evicted,
+.domain   = domain_id,
+.u= { .batch = { .nr = nr } }
+};
+
+if ( xc_hypercall_bounce_pre(xch, gfns) )
+{
+PERROR("Could not bounce memory for 
XENMEM_paging_op_populate_evicted");
+return -1;
+}
+
+set_xen_guest_handle(mpo.u.batch.gfns, gfns);
+
+rc = do_memory_op(xch, XENMEM_paging_op, &mpo, sizeof(mpo));
+
+xc_hypercall_bounce_post(xch, gfns);
+
+return rc;
+}
+
 
 /*
  * Local variables:
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 77b0af1..bc41bde 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4955,9 +4955,10 @@ int xenmem_add_to_physmap_one(
 
 long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
-int rc;
+long rc;
+int op = cmd & MEMOP_CMD_MASK;
 
-switch ( cmd )
+switch ( op )
 {
 case XENMEM_set_memory_map:
 {
diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
index e23e26c..8f62f58 100644
--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -21,12 +21,17 @@
 
 
 #include 
+#include 
 #include 
+#include 
 #include 
 
-int mem_paging_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
+long mem_paging_memop(unsigned long cmd,
+  XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
 {
-int rc;
+long rc;

[Xen-devel] [PATCH RFC v2 16/23] libxl/libxl_stream_write.c: track callback chains with an explicit phase

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

There are three callback chains through libxl_stream_write: the 'normal'
straight-through save path initiated by libxl__stream_write_start(), the
iterated checkpoint path initiated each time by
libxl__stream_write_start_checkpoint(), and the (short) back-channel
checkpoint path initiated by libxl__stream_write_checkpoint_state().
These paths share significant common code but handle failure and
completion slightly differently, so it is necessary to keep track of
the callback chain currently in progress and act accordingly at various
points.

Until now, a collection of booleans in the stream write state has been
used to indicate the current callback chain.  However, the set of
callback chains is really better described by an enum, since only one
callback chain can actually be active at one time.  In anticipation of
the addition of a new chain for postcopy live migration, refactor the
existing logic to use an enum rather than booleans for callback chain
tracking.

No functional change.

Signed-off-by: Joshua Otto 
---
 tools/libxl/libxl_internal.h |  7 ++-
 tools/libxl/libxl_stream_write.c | 96 ++--
 2 files changed, 48 insertions(+), 55 deletions(-)

diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 89de86b..cef2f39 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -3211,9 +3211,12 @@ struct libxl__stream_write_state {
 /* Private */
 int rc;
 bool running;
-bool in_checkpoint;
+enum {
+SWS_PHASE_NORMAL,
+SWS_PHASE_CHECKPOINT,
+SWS_PHASE_CHECKPOINT_STATE
+} phase;
 bool sync_teardown;  /* Only used to coordinate shutdown on error path. */
-bool in_checkpoint_state;
 libxl__save_helper_state shs;
 
 /* Main stream-writing data. */
diff --git a/tools/libxl/libxl_stream_write.c b/tools/libxl/libxl_stream_write.c
index c96a6a2..8f2a1c9 100644
--- a/tools/libxl/libxl_stream_write.c
+++ b/tools/libxl/libxl_stream_write.c
@@ -89,12 +89,9 @@ static void emulator_context_read_done(libxl__egc *egc,
int rc, int onwrite, int errnoval);
 static void emulator_context_record_done(libxl__egc *egc,
  libxl__stream_write_state *stream);
-static void write_end_record(libxl__egc *egc,
- libxl__stream_write_state *stream);
+static void write_phase_end_record(libxl__egc *egc,
+   libxl__stream_write_state *stream);
 
-/* Event chain unique to checkpointed streams. */
-static void write_checkpoint_end_record(libxl__egc *egc,
-libxl__stream_write_state *stream);
 static void checkpoint_end_record_done(libxl__egc *egc,
libxl__stream_write_state *stream);
 
@@ -213,7 +210,7 @@ void libxl__stream_write_init(libxl__stream_write_state 
*stream)
 
 stream->rc = 0;
 stream->running = false;
-stream->in_checkpoint = false;
+stream->phase = SWS_PHASE_NORMAL;
 stream->sync_teardown = false;
 FILLZERO(stream->dc);
 stream->record_done_callback = NULL;
@@ -294,9 +291,9 @@ void libxl__stream_write_start_checkpoint(libxl__egc *egc,
   libxl__stream_write_state *stream)
 {
 assert(stream->running);
-assert(!stream->in_checkpoint);
+assert(stream->phase == SWS_PHASE_NORMAL);
 assert(!stream->back_channel);
-stream->in_checkpoint = true;
+stream->phase = SWS_PHASE_CHECKPOINT;
 
 write_emulator_xenstore_record(egc, stream);
 }
@@ -431,12 +428,8 @@ static void emulator_xenstore_record_done(libxl__egc *egc,
 
 if (dss->type == LIBXL_DOMAIN_TYPE_HVM)
 write_emulator_context_record(egc, stream);
-else {
-if (stream->in_checkpoint)
-write_checkpoint_end_record(egc, stream);
-else
-write_end_record(egc, stream);
-}
+else
+write_phase_end_record(egc, stream);
 }
 
 static void write_emulator_context_record(libxl__egc *egc,
@@ -534,34 +527,35 @@ static void emulator_context_record_done(libxl__egc *egc,
 free(stream->emu_body);
 stream->emu_body = NULL;
 
-if (stream->in_checkpoint)
-write_checkpoint_end_record(egc, stream);
-else
-write_end_record(egc, stream);
+write_phase_end_record(egc, stream);
 }
 
-static void write_end_record(libxl__egc *egc,
- libxl__stream_write_state *stream)
+static void write_phase_end_record(libxl__egc *egc,
+   libxl__stream_write_state *stream)
 {
 struct libxl__sr_rec_hdr rec;
+sws_record_done_cb cb;
+const char *what;
 
 FILLZERO(rec);
-rec.type = REC_TYPE_END;
-
-setup_write(egc, stream, "end record",
-&rec, NULL, stream_success);
-}
-
-static void write_checkpoint_end_record(libxl__egc *egc,
-

[Xen-devel] [PATCH RFC v2 13/23] libxc/migration: add try_read_record()

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

Enable non-blocking migration record reads by adding a helper routine that
manages the context of a record read across multiple invocations as the record's
data becomes available over time.

Signed-off-by: Joshua Otto 
---
 tools/libxc/xc_private.c   | 21 +++
 tools/libxc/xc_private.h   |  2 ++
 tools/libxc/xc_sr_common.c | 65 ++
 tools/libxc/xc_sr_common.h | 39 
 4 files changed, 122 insertions(+), 5 deletions(-)

diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
index f395594..b33d02f 100644
--- a/tools/libxc/xc_private.c
+++ b/tools/libxc/xc_private.c
@@ -633,26 +633,37 @@ void bitmap_byte_to_64(uint64_t *lp, const uint8_t *bp, int nbits)
 }
 }
 
-int read_exact(int fd, void *data, size_t size)
+int try_read_exact(int fd, void *data, size_t size, size_t *offset)
 {
-size_t offset = 0;
 ssize_t len;
 
-while ( offset < size )
+assert(offset);
+*offset = 0;
+while ( *offset < size )
 {
-len = read(fd, (char *)data + offset, size - offset);
+len = read(fd, (char *)data + *offset, size - *offset);
 if ( (len == -1) && (errno == EINTR) )
 continue;
 if ( len == 0 )
 errno = 0;
 if ( len <= 0 )
 return -1;
-offset += len;
+*offset += len;
 }
 
 return 0;
 }
 
+int read_exact(int fd, void *data, size_t size)
+{
+size_t offset;
+int rc;
+
+rc = try_read_exact(fd, data, size, &offset);
+assert(rc == -1 || offset == size);
+return rc;
+}
+
 int write_exact(int fd, const void *data, size_t size)
 {
 size_t offset = 0;
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 1c27b0f..aaae344 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -384,6 +384,8 @@ int xc_flush_mmu_updates(xc_interface *xch, struct xc_mmu *mmu);
 
 /* Return 0 on success; -1 on error setting errno. */
 int read_exact(int fd, void *data, size_t size); /* EOF => -1, errno=0 */
+/* Like read_exact(), but stores the length read before error to *offset. */
+int try_read_exact(int fd, void *data, size_t size, size_t *offset);
 int write_exact(int fd, const void *data, size_t size);
 int writev_exact(int fd, const struct iovec *iov, int iovcnt);
 
diff --git a/tools/libxc/xc_sr_common.c b/tools/libxc/xc_sr_common.c
index 090b5fd..c37fe1f 100644
--- a/tools/libxc/xc_sr_common.c
+++ b/tools/libxc/xc_sr_common.c
@@ -147,6 +147,71 @@ int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec)
 return 0;
 };
 
+int try_read_record(struct xc_sr_read_record_context *rrctx, int fd,
+struct xc_sr_record *rec)
+{
+int rc;
+xc_interface *xch = rrctx->ctx->xch;
+size_t offset_out, dataoff, datasz;
+
+/* If the header isn't yet complete, attempt to finish it first. */
+if ( rrctx->offset < sizeof(rrctx->rhdr) )
+{
+rc = try_read_exact(fd, (char *)&rrctx->rhdr + rrctx->offset,
+sizeof(rrctx->rhdr) - rrctx->offset, &offset_out);
+rrctx->offset += offset_out;
+
+if ( rc )
+return rc;
+}
+
+datasz = ROUNDUP(rrctx->rhdr.length, REC_ALIGN_ORDER);
+
+if ( datasz )
+{
+if ( !rrctx->data )
+{
+rrctx->data = malloc(datasz);
+
+if ( !rrctx->data )
+{
+ERROR("Unable to allocate %zu bytes for record (0x%08x, %s)",
+  datasz, rrctx->rhdr.type,
+  rec_type_to_str(rrctx->rhdr.type));
+return -1;
+}
+}
+
+dataoff = rrctx->offset - sizeof(rrctx->rhdr);
+rc = try_read_exact(fd, (char *)rrctx->data + dataoff, datasz - dataoff,
+&offset_out);
+rrctx->offset += offset_out;
+
+if ( rc == -1 )
+{
+/* Differentiate between expected and fatal errors. */
+if ( (errno != EAGAIN) && (errno != EWOULDBLOCK) )
+{
+free(rrctx->data);
+rrctx->data = NULL;
+PERROR("Failed to read %zu bytes for record (0x%08x, %s)",
+   datasz, rrctx->rhdr.type,
+   rec_type_to_str(rrctx->rhdr.type));
+}
+
+return rc;
+}
+}
+
+/* Success!  Fill in the output record structure. */
+rec->type   = rrctx->rhdr.type;
+rec->length = rrctx->rhdr.length;
+rec->data   = rrctx->data;
+rrctx->data = NULL;
+
+return 0;
+}
+
 int validate_pages_record(struct xc_sr_context *ctx, struct xc_sr_record *rec,
   uint32_t expected_type)
 {
diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
index fc82e71..ce72e0d 100644
--- a/tools/libxc/xc_sr_common.h
+++ b/tools/libxc/xc_sr_common.h
@@ -399,6 +399,45 @@ static inline int write_record(struct xc_sr_context *ctx, int fd,
 int 

[Xen-devel] [PATCH RFC v2 05/23] libxc/xc_sr_restore: introduce generic 'pages' records

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

The PAGE_DATA migration record type is specified as an array of
uint64_ts encoding pfns and their types, followed by an array of page
contents.  Postcopy live migration specifies a number of records with
similar or the same format, and it would be convenient to be able to
re-use the code that validates and unpacks such records for each type.
To facilitate this, introduce the generic 'pages' name for such records
and rename the PAGE_DATA stream format struct and pfn encoding masks
accordingly.

No functional change.

Signed-off-by: Joshua Otto 
---
 tools/libxc/xc_sr_common.c| 2 +-
 tools/libxc/xc_sr_restore.c   | 6 +++---
 tools/libxc/xc_sr_save.c  | 2 +-
 tools/libxc/xc_sr_stream_format.h | 6 +++---
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/tools/libxc/xc_sr_common.c b/tools/libxc/xc_sr_common.c
index c1babf6..08abe9a 100644
--- a/tools/libxc/xc_sr_common.c
+++ b/tools/libxc/xc_sr_common.c
@@ -146,7 +146,7 @@ static void __attribute__((unused)) build_assertions(void)
 BUILD_BUG_ON(sizeof(struct xc_sr_dhdr) != 16);
 BUILD_BUG_ON(sizeof(struct xc_sr_rhdr) != 8);
 
-BUILD_BUG_ON(sizeof(struct xc_sr_rec_page_data_header)  != 8);
+BUILD_BUG_ON(sizeof(struct xc_sr_rec_pages_header)  != 8);
 BUILD_BUG_ON(sizeof(struct xc_sr_rec_x86_pv_info)   != 8);
 BUILD_BUG_ON(sizeof(struct xc_sr_rec_x86_pv_p2m_frames) != 8);
 BUILD_BUG_ON(sizeof(struct xc_sr_rec_x86_pv_vcpu_hdr)   != 8);
diff --git a/tools/libxc/xc_sr_restore.c b/tools/libxc/xc_sr_restore.c
index 2f35f4d..fc47a25 100644
--- a/tools/libxc/xc_sr_restore.c
+++ b/tools/libxc/xc_sr_restore.c
@@ -332,7 +332,7 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned count,
static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
 {
 xc_interface *xch = ctx->xch;
-struct xc_sr_rec_page_data_header *pages = rec->data;
+struct xc_sr_rec_pages_header *pages = rec->data;
 unsigned i, pages_of_data = 0;
 int rc = -1;
 
@@ -368,14 +368,14 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
 
 for ( i = 0; i < pages->count; ++i )
 {
-pfn = pages->pfn[i] & PAGE_DATA_PFN_MASK;
+pfn = pages->pfn[i] & REC_PFINFO_PFN_MASK;
 if ( !ctx->restore.ops.pfn_is_valid(ctx, pfn) )
 {
 ERROR("pfn %#"PRIpfn" (index %u) outside domain maximum", pfn, i);
 goto err;
 }
 
-type = (pages->pfn[i] & PAGE_DATA_TYPE_MASK) >> 32;
+type = (pages->pfn[i] & REC_PFINFO_TYPE_MASK) >> 32;
 if ( ((type >> XEN_DOMCTL_PFINFO_LTAB_SHIFT) >= 5) &&
  ((type >> XEN_DOMCTL_PFINFO_LTAB_SHIFT) <= 8) )
 {
diff --git a/tools/libxc/xc_sr_save.c b/tools/libxc/xc_sr_save.c
index e93d8fd..b1a24b7 100644
--- a/tools/libxc/xc_sr_save.c
+++ b/tools/libxc/xc_sr_save.c
@@ -89,7 +89,7 @@ static int write_batch(struct xc_sr_context *ctx)
 void *page, *orig_page;
 uint64_t *rec_pfns = NULL;
 struct iovec *iov = NULL; int iovcnt = 0;
-struct xc_sr_rec_page_data_header hdr = { 0 };
+struct xc_sr_rec_pages_header hdr = { 0 };
 struct xc_sr_record rec =
 {
 .type = REC_TYPE_PAGE_DATA,
diff --git a/tools/libxc/xc_sr_stream_format.h b/tools/libxc/xc_sr_stream_format.h
index 3291b25..32400b2 100644
--- a/tools/libxc/xc_sr_stream_format.h
+++ b/tools/libxc/xc_sr_stream_format.h
@@ -80,15 +80,15 @@ struct xc_sr_rhdr
#define REC_TYPE_OPTIONAL 0x80000000U
 
 /* PAGE_DATA */
-struct xc_sr_rec_page_data_header
+struct xc_sr_rec_pages_header
 {
 uint32_t count;
 uint32_t _res1;
 uint64_t pfn[0];
 };
 
-#define PAGE_DATA_PFN_MASK  0x000fffffffffffffULL
-#define PAGE_DATA_TYPE_MASK 0xf000000000000000ULL
+#define REC_PFINFO_PFN_MASK  0x000fffffffffffffULL
+#define REC_PFINFO_TYPE_MASK 0xf000000000000000ULL
 
 /* X86_PV_INFO */
 struct xc_sr_rec_x86_pv_info
-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH RFC v2 11/23] libxc/migration: correct hvm record ordering specification

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

The libxc migration stream specification document asserts that, within
an hvm migration stream, "HVM_PARAMS must precede HVM_CONTEXT, as
certain parameters can affect the validity of architectural state in the
context."  This sounds reasonable, but the in-tree implementation of hvm
domain save actually writes these records in the _reverse_ order, with
HVM_CONTEXT first and HVM_PARAMS next.  This has been the case for the
entire history of that implementation, seemingly to no ill effect, so
update the spec to reflect this.

Signed-off-by: Joshua Otto 
---
 docs/specs/libxc-migration-stream.pandoc | 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/docs/specs/libxc-migration-stream.pandoc b/docs/specs/libxc-migration-stream.pandoc
index 73421ff..8342d88 100644
--- a/docs/specs/libxc-migration-stream.pandoc
+++ b/docs/specs/libxc-migration-stream.pandoc
@@ -673,11 +673,8 @@ A typical save record for an x86 HVM guest image would look like:
 2. Domain header
 3. Many PAGE\_DATA records
 4. TSC\_INFO
-5. HVM\_PARAMS
-6. HVM\_CONTEXT
-
-HVM\_PARAMS must precede HVM\_CONTEXT, as certain parameters can affect
-the validity of architectural state in the context.
+5. HVM\_CONTEXT
+6. HVM\_PARAMS
 
 
 Legacy Images (x86 only)
-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH RFC v2 17/23] libxl/libxl_stream_read.c: track callback chains with an explicit phase

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

Do for libxl_stream_read what the previous patch did for
libxl_stream_write.  libxl_stream_read already has a notion of phase for
its record-buffering behaviour; this is combined with the callback
chain phase.  Again, this is done to support the addition of a new
callback chain for postcopy live migration.

No functional change.

Signed-off-by: Joshua Otto 
---
 tools/libxl/libxl_internal.h|  7 ++--
 tools/libxl/libxl_stream_read.c | 83 +
 2 files changed, 45 insertions(+), 45 deletions(-)

diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index cef2f39..30d5492 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -3133,9 +3133,7 @@ struct libxl__stream_read_state {
 /* Private */
 int rc;
 bool running;
-bool in_checkpoint;
 bool sync_teardown; /* Only used to coordinate shutdown on error path. */
-bool in_checkpoint_state;
 libxl__save_helper_state shs;
 libxl__conversion_helper_state chs;
 
@@ -3145,8 +3143,9 @@ struct libxl__stream_read_state {
 LIBXL_STAILQ_HEAD(, libxl__sr_record_buf) record_queue; /* NOGC */
 enum {
 SRS_PHASE_NORMAL,
-SRS_PHASE_BUFFERING,
-SRS_PHASE_UNBUFFERING,
+SRS_PHASE_CHECKPOINT_BUFFERING,
+SRS_PHASE_CHECKPOINT_UNBUFFERING,
+SRS_PHASE_CHECKPOINT_STATE
 } phase;
 bool recursion_guard;
 
diff --git a/tools/libxl/libxl_stream_read.c b/tools/libxl/libxl_stream_read.c
index 89c2f21..4cb553e 100644
--- a/tools/libxl/libxl_stream_read.c
+++ b/tools/libxl/libxl_stream_read.c
@@ -29,14 +29,15 @@
  * processed, and all records will be processed in queue order.
  *
  * Internal states:
- *   running  phase   in_ record   incoming
- *checkpoint  _queue   _record
+ *   running  phase   record   incoming
+ *_queue   _record
  *
- * Undefinedundef  undefundef   undefundef
- * Idle false  undeffalse   00
- * Active   true   NORMAL   false   0/1  0/partial
- * Active   true   BUFFERINGtrueany  0/partial
- * Active   true   UNBUFFERING  trueany  0
+ * Undefinedundef  undefundefundef
+ * Idle false  undef00
+ * Active   true   NORMAL   0/1  0/partial
+ * Active   true   CHECKPOINT_BUFFERING any  0/partial
+ * Active   true   CHECKPOINT_UNBUFFERING   any  0
+ * Active   true   CHECKPOINT_STATE 0/1  0/partial
  *
  * While reading data from the stream, 'dc' is active and a callback
  * is expected.  Most actions in process_record() start a callback of
@@ -48,12 +49,12 @@
  *   Records are read one at time and immediately processed.  (The
  *   record queue will not contain more than a single record.)
  *
- * PHASE_BUFFERING:
+ * PHASE_CHECKPOINT_BUFFERING:
  *   This phase is used in checkpointed streams, when libxc signals
  *   the presence of a checkpoint in the stream.  Records are read and
  *   buffered until a CHECKPOINT_END record has been read.
  *
- * PHASE_UNBUFFERING:
+ * PHASE_CHECKPOINT_UNBUFFERING:
  *   Once a CHECKPOINT_END record has been read, all buffered records
  *   are processed.
  *
@@ -172,6 +173,12 @@ static void checkpoint_state_done(libxl__egc *egc,
 
 /*- Helpers -*/
 
+static inline bool stream_in_checkpoint(libxl__stream_read_state *stream)
+{
+return stream->phase == SRS_PHASE_CHECKPOINT_BUFFERING ||
+   stream->phase == SRS_PHASE_CHECKPOINT_UNBUFFERING;
+}
+
 /* Helper to set up reading some data from the stream. */
 static int setup_read(libxl__stream_read_state *stream,
   const char *what, void *ptr, size_t nr_bytes,
@@ -210,7 +217,6 @@ void libxl__stream_read_init(libxl__stream_read_state *stream)
 
 stream->rc = 0;
 stream->running = false;
-stream->in_checkpoint = false;
 stream->sync_teardown = false;
 FILLZERO(stream->dc);
 FILLZERO(stream->hdr);
@@ -297,10 +303,9 @@ void libxl__stream_read_start_checkpoint(libxl__egc *egc,
  libxl__stream_read_state *stream)
 {
 assert(stream->running);
-assert(!stream->in_checkpoint);
+assert(stream->phase == SRS_PHASE_NORMAL);
 
-stream->in_checkpoint = true;
-stream->phase = SRS_PHASE_BUFFERING;
+stream->phase = SRS_PHASE_CHECKPOINT_BUFFERING;
 
 /*
  * Libxc has handed control of the fd to us.  Start reading some
@@ -392,6 +397,7 @@ static void stream_continue(libxl__egc *egc,
 
 switch (stream->phase) {
 case SRS_PHASE_NORMAL:
+case SRS_PHASE_CHECKPOINT_STATE:
 /*
  * Normal phase (regular migration or restore from file):
  *
@@ -416,9 +422,9 @@ static void stream_continue(libxl__egc *egc,
 }
   

[Xen-devel] [PATCH RFC v2 14/23] libxc/migration: implement the sender side of postcopy live migration

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

Add a new 'postcopy' phase to the live migration algorithm, during which
unmigrated domain memory is paged over the network on-demand _after_ the
guest has been resumed at the destination.

To do so:
- Add a new precopy policy option, XGS_POLICY_POSTCOPY, that policies
  can use to request a transition to the postcopy live migration phase
  rather than a stop-and-copy of the remaining dirty pages.
- Add support to xc_domain_save() for this policy option by breaking out
  of the precopy loop early, transmitting the final set of dirty pfns
  and all remaining domain state (including higher-layer state) except
  memory, and entering a postcopy loop during which the remaining page
  data is pushed in the background.  Remote requests for specific pages
  in response to faults in the domain are serviced with priority in this
  loop.

The new save callbacks required for this migration phase are stubbed in
libxl for now, to be replaced in a subsequent patch that adds libxl
support for this migration phase.  Support for this phase on the
migration receiver side follows immediately in the next patch.

Signed-off-by: Joshua Otto 
---
 tools/libxc/include/xenguest.h |  84 ---
 tools/libxc/xc_sr_common.h |   8 +-
 tools/libxc/xc_sr_save.c   | 488 ++---
 tools/libxc/xc_sr_save_x86_hvm.c   |  13 +
 tools/libxc/xg_save_restore.h  |  16 +-
 tools/libxl/libxl_dom_save.c   |  11 +-
 tools/libxl/libxl_save_msgs_gen.pl |   6 +-
 7 files changed, 558 insertions(+), 68 deletions(-)

diff --git a/tools/libxc/include/xenguest.h b/tools/libxc/include/xenguest.h
index 215abd0..a662273 100644
--- a/tools/libxc/include/xenguest.h
+++ b/tools/libxc/include/xenguest.h
@@ -56,41 +56,59 @@ struct save_callbacks {
 #define XGS_POLICY_CONTINUE_PRECOPY 0  /* Remain in the precopy phase. */
#define XGS_POLICY_STOP_AND_COPY 1  /* Immediately suspend and transmit the
 * remaining dirty pages. */
+#define XGS_POLICY_POSTCOPY 2  /* Suspend the guest and transition into
+* the postcopy phase of the migration. */
 int (*precopy_policy)(struct precopy_stats stats, void *data);
 
-/* Called after the guest's dirty pages have been
- *  copied into an output buffer.
- * Callback function resumes the guest & the device model,
- *  returns to xc_domain_save.
- * xc_domain_save then flushes the output buffer, while the
- *  guest continues to run.
- */
-int (*aftercopy)(void* data);
-
-/* Called after the memory checkpoint has been flushed
- * out into the network. Typical actions performed in this
- * callback include:
- *   (a) send the saved device model state (for HVM guests),
- *   (b) wait for checkpoint ack
- *   (c) release the network output buffer pertaining to the acked checkpoint.
- *   (c) sleep for the checkpoint interval.
- *
- * returns:
- * 0: terminate checkpointing gracefully
- * 1: take another checkpoint */
-int (*checkpoint)(void* data);
-
-/*
- * Called after the checkpoint callback.
- *
- * returns:
- * 0: terminate checkpointing gracefully
- * 1: take another checkpoint
- */
-int (*wait_checkpoint)(void* data);
-
-/* Enable qemu-dm logging dirty pages to xen */
-int (*switch_qemu_logdirty)(int domid, unsigned enable, void *data); /* HVM only */
+/* Checkpointing and postcopy live migration are mutually exclusive. */
+union {
+struct {
+/*
+ * Called during a live migration's transition to the postcopy phase
+ * to yield control of the stream back to a higher layer so it can
+ * transmit records needed for resumption of the guest at the
+ * destination (e.g. device model state, xenstore context)
+ */
+int (*postcopy_transition)(void *data);
+};
+
+struct {
+/* Called after the guest's dirty pages have been
+ *  copied into an output buffer.
+ * Callback function resumes the guest & the device model,
+ *  returns to xc_domain_save.
+ * xc_domain_save then flushes the output buffer, while the
+ *  guest continues to run.
+ */
+int (*aftercopy)(void* data);
+
+/* Called after the memory checkpoint has been flushed
+ * out into the network. Typical actions performed in this
+ * callback include:
+ *   (a) send the saved device model state (for HVM guests),
+ *   (b) wait for checkpoint ack
+ *   (c) release the network output buffer pertaining to the acked
+ *   checkpoint.
+ *   (c) sleep for the checkpoint interval.
+ *
+ * returns:
+ * 0: terminate checkpointing gracefully
+ * 1: 

[Xen-devel] [PATCH RFC v2 18/23] libxl/migration: implement the sender side of postcopy live migration

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

To make the libxl sender capable of supporting postcopy live migration:
- Add a postcopy transition callback chain through the stream writer (this
  callback chain is nearly identical to the checkpoint callback chain, and
  differs meaningfully only in its failure/completion behaviour)
- Wire this callback chain up to the xc postcopy callback entries in the domain
  save logic.
- Introduce a new libxl API function, libxl_domain_live_migrate(),
  taking the same parameters as libxl_domain_suspend() as well as a
  recv_fd to enable bi-directional communication between the sender and
  receiver and a boolean out-parameter to enable the caller to reason
  about the safety of recovery from a postcopy failure. (the
  live_migrate() and domain_suspend() parameter lists will likely only
  continue to diverge over time, so it makes good sense to split them
  now)

No mechanism is introduced yet to enable library clients to induce a postcopy
live migration - this will follow after the libxl postcopy receiver logic.

Signed-off-by: Joshua Otto 
---
 docs/specs/libxl-migration-stream.pandoc | 19 -
 tools/libxl/libxl.h  |  7 
 tools/libxl/libxl_dom_save.c | 25 +++-
 tools/libxl/libxl_domain.c   | 29 +-
 tools/libxl/libxl_internal.h | 21 --
 tools/libxl/libxl_sr_stream_format.h | 13 +++---
 tools/libxl/libxl_stream_write.c | 69 ++--
 tools/xl/xl_migrate.c|  6 ++-
 8 files changed, 169 insertions(+), 20 deletions(-)

diff --git a/docs/specs/libxl-migration-stream.pandoc b/docs/specs/libxl-migration-stream.pandoc
index a1ba1ac..8d00cd7 100644
--- a/docs/specs/libxl-migration-stream.pandoc
+++ b/docs/specs/libxl-migration-stream.pandoc
@@ -2,7 +2,8 @@
 % Andrew Cooper <>
   Wen Congyang <>
   Yang Hongyang <>
-% Revision 2
+  Joshua Otto <>
+% Revision 3
 
 Introduction
 
@@ -123,7 +124,9 @@ type 0x00000000: END
 
  0x00000005: CHECKPOINT_STATE
 
- 0x00000006 - 0x7FFFFFFF: Reserved for future _mandatory_
+ 0x00000006: POSTCOPY_TRANSITION_END
+
+ 0x00000007 - 0x7FFFFFFF: Reserved for future _mandatory_
  records.
 
  0x80000000 - 0xFFFFFFFF: Reserved for future _optional_
@@ -304,6 +307,18 @@ While Secondary is running in below loop:
 b. Send _CHECKPOINT\_SVM\_SUSPENDED_ to primary
 4. Checkpoint
 
+POSTCOPY\_TRANSITION\_END
+-
+
+A postcopy transition end record marks the end of a postcopy transition in a
+libxl live migration stream.  It indicates that control of the stream should be
+returned to libxc for the postcopy memory migration phase.
+
     0     1     2     3     4     5     6     7 octet
+-------------------------------------------------+
+
+The postcopy transition end record contains no fields; its body_length is 0.
+
 Future Extensions
 =
 
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index cf8687a..5e48862 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -1387,6 +1387,13 @@ int libxl_domain_suspend(libxl_ctx *ctx, uint32_t domid, int fd,
 #define LIBXL_SUSPEND_DEBUG 1
 #define LIBXL_SUSPEND_LIVE 2
 
+int libxl_domain_live_migrate(libxl_ctx *ctx, uint32_t domid, int send_fd,
+  int flags, /* LIBXL_SUSPEND_* */
+  int recv_fd,
+  bool *postcopy_transitioned, /* OUT */
+  const libxl_asyncop_how *ao_how)
+  LIBXL_EXTERNAL_CALLERS_ONLY;
+
 /* @param suspend_cancel [from xenctrl.h:xc_domain_resume( @param fast )]
  *   If this parameter is true, use co-operative resume. The guest
  *   must support this.
diff --git a/tools/libxl/libxl_dom_save.c b/tools/libxl/libxl_dom_save.c
index eb1271e..75ab523 100644
--- a/tools/libxl/libxl_dom_save.c
+++ b/tools/libxl/libxl_dom_save.c
@@ -350,10 +350,31 @@ static int libxl__save_live_migration_precopy_policy(
 return XGS_POLICY_CONTINUE_PRECOPY;
 }
 
+static void postcopy_transition_done(libxl__egc *egc,
+ libxl__stream_write_state *sws, int rc);
+
 static void libxl__save_live_migration_postcopy_transition_callback(void *user)
 {
-/* XXX we're not yet ready to deal with this */
-assert(0);
+libxl__save_helper_state *shs = user;
+libxl__stream_write_state *sws = CONTAINER_OF(shs, *sws, shs);
+sws->postcopy_transition_callback = postcopy_transition_done;
+libxl__stream_write_start_postcopy_transition(shs->egc, sws);
+}
+
+static void postcopy_transition_done(libxl__egc *egc,
+ libxl__stream_write_state *sws,
+ int rc)
+{
+libxl__domain_save_state *dss = sws->dss;
+
+/* Past here, it's _possible_ that the domain may execute at the
+ * destination, so - 

[Xen-devel] [PATCH RFC v2 01/23] tools: rename COLO 'postcopy' to 'aftercopy'

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

The COLO xc domain save and restore procedures both make use of a 'postcopy'
callback to defer part of each checkpoint operation to xl.  In this context, the
name 'postcopy' is meant as "the callback invoked immediately after this
checkpoint's memory callback."  This is an unfortunate name collision with the
other common use of 'postcopy' in the context of live migration, where it is
used to mean "a memory migration that permits the guest to execute at the
destination before all of its memory is migrated by servicing accesses to
unmigrated memory via a network page-fault."

Mechanically rename 'postcopy' -> 'aftercopy' to free up the postcopy namespace
while preserving the original intent of the name in the COLO context.

No functional change.

Signed-off-by: Joshua Otto 
Acked-by: Zhang Chen 
---
 tools/libxc/include/xenguest.h | 4 ++--
 tools/libxc/xc_sr_restore.c| 4 ++--
 tools/libxc/xc_sr_save.c   | 4 ++--
 tools/libxl/libxl_colo_restore.c   | 2 +-
 tools/libxl/libxl_colo_save.c  | 2 +-
 tools/libxl/libxl_remus.c  | 2 +-
 tools/libxl/libxl_save_msgs_gen.pl | 2 +-
 7 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/tools/libxc/include/xenguest.h b/tools/libxc/include/xenguest.h
index 40902ee..aa8cc8b 100644
--- a/tools/libxc/include/xenguest.h
+++ b/tools/libxc/include/xenguest.h
@@ -53,7 +53,7 @@ struct save_callbacks {
  * xc_domain_save then flushes the output buffer, while the
  *  guest continues to run.
  */
-int (*postcopy)(void* data);
+int (*aftercopy)(void* data);
 
 /* Called after the memory checkpoint has been flushed
  * out into the network. Typical actions performed in this
@@ -115,7 +115,7 @@ struct restore_callbacks {
  * Callback function resumes the guest & the device model,
  * returns to xc_domain_restore.
  */
-int (*postcopy)(void* data);
+int (*aftercopy)(void* data);
 
 /* A checkpoint record has been found in the stream.
  * returns: */
diff --git a/tools/libxc/xc_sr_restore.c b/tools/libxc/xc_sr_restore.c
index 3549f0a..ee06b3d 100644
--- a/tools/libxc/xc_sr_restore.c
+++ b/tools/libxc/xc_sr_restore.c
@@ -576,7 +576,7 @@ static int handle_checkpoint(struct xc_sr_context *ctx)
 ctx->restore.callbacks->data);
 
 /* Resume secondary vm */
-ret = ctx->restore.callbacks->postcopy(ctx->restore.callbacks->data);
+ret = ctx->restore.callbacks->aftercopy(ctx->restore.callbacks->data);
 HANDLE_CALLBACK_RETURN_VALUE(ret);
 
 /* Wait for a new checkpoint */
@@ -855,7 +855,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
 {
 /* this is COLO restore */
 assert(callbacks->suspend &&
-   callbacks->postcopy &&
+   callbacks->aftercopy &&
callbacks->wait_checkpoint &&
callbacks->restore_results);
 }
diff --git a/tools/libxc/xc_sr_save.c b/tools/libxc/xc_sr_save.c
index ca6913b..3837bc1 100644
--- a/tools/libxc/xc_sr_save.c
+++ b/tools/libxc/xc_sr_save.c
@@ -863,7 +863,7 @@ static int save(struct xc_sr_context *ctx, uint16_t guest_type)
 }
 }
 
-rc = ctx->save.callbacks->postcopy(ctx->save.callbacks->data);
+rc = ctx->save.callbacks->aftercopy(ctx->save.callbacks->data);
 if ( rc <= 0 )
 goto err;
 
@@ -951,7 +951,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
 if ( hvm )
 assert(callbacks->switch_qemu_logdirty);
 if ( ctx.save.checkpointed )
-assert(callbacks->checkpoint && callbacks->postcopy);
+assert(callbacks->checkpoint && callbacks->aftercopy);
 if ( ctx.save.checkpointed == XC_MIG_STREAM_COLO )
 assert(callbacks->wait_checkpoint);
 
diff --git a/tools/libxl/libxl_colo_restore.c b/tools/libxl/libxl_colo_restore.c
index 0c535bd..7d8f9ff 100644
--- a/tools/libxl/libxl_colo_restore.c
+++ b/tools/libxl/libxl_colo_restore.c
@@ -246,7 +246,7 @@ void libxl__colo_restore_setup(libxl__egc *egc,
 if (init_dsps(>dsps))
 goto out;
 
-callbacks->postcopy = libxl__colo_restore_domain_resume_callback;
+callbacks->aftercopy = libxl__colo_restore_domain_resume_callback;
callbacks->wait_checkpoint = libxl__colo_restore_domain_wait_checkpoint_callback;
 callbacks->suspend = libxl__colo_restore_domain_suspend_callback;
 callbacks->checkpoint = libxl__colo_restore_domain_checkpoint_callback;
diff --git a/tools/libxl/libxl_colo_save.c b/tools/libxl/libxl_colo_save.c
index f687d5a..5921196 100644
--- a/tools/libxl/libxl_colo_save.c
+++ b/tools/libxl/libxl_colo_save.c
@@ -145,7 +145,7 @@ void libxl__colo_save_setup(libxl__egc *egc, 
libxl__colo_save_state *css)
 
 callbacks->suspend = libxl__colo_save_domain_suspend_callback;
 callbacks->checkpoint = libxl__colo_save_domain_checkpoint_callback;
-

[Xen-devel] [PATCH RFC v2 09/23] libxl/migration: wire up the precopy policy RPC callback

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

Permit libxl to implement the xc_domain_save() precopy_policy callback
by adding it to the RPC generation machinery and implementing a policy
in libxl with the same semantics as the old one.

No functional change.

Signed-off-by: Joshua Otto 
---
 tools/libxc/xc_sr_save.c   | 17 +
 tools/libxl/libxl_dom_save.c   | 23 +++
 tools/libxl/libxl_save_msgs_gen.pl |  4 +++-
 3 files changed, 27 insertions(+), 17 deletions(-)

diff --git a/tools/libxc/xc_sr_save.c b/tools/libxc/xc_sr_save.c
index 55b77ff..48d403b 100644
--- a/tools/libxc/xc_sr_save.c
+++ b/tools/libxc/xc_sr_save.c
@@ -1001,17 +1001,6 @@ static int save(struct xc_sr_context *ctx, uint16_t guest_type)
 return rc;
 };
 
-static int simple_precopy_policy(struct precopy_stats stats, void *user)
-{
-if (stats.dirty_count >= 0 && stats.dirty_count < 50)
-return XGS_POLICY_STOP_AND_COPY;
-
-if (stats.iteration >= 5)
-return XGS_POLICY_STOP_AND_COPY;
-
-return XGS_POLICY_CONTINUE_PRECOPY;
-}
-
 int xc_domain_save(xc_interface *xch, const struct domain_save_params *params,
const struct save_callbacks* callbacks)
 {
@@ -1021,12 +1010,8 @@ int xc_domain_save(xc_interface *xch, const struct domain_save_params *params,
 .fd = params->save_fd,
 };
 
-/* XXX use this to shim our precopy_policy in before moving it to libxl */
-struct save_callbacks overridden_callbacks = *callbacks;
-overridden_callbacks.precopy_policy = simple_precopy_policy;
-
 /* GCC 4.4 (of CentOS 6.x vintage) can't initialise anonymous unions. */
-ctx.save.callbacks = &overridden_callbacks;
+ctx.save.callbacks = callbacks;
 ctx.save.live  = params->live;
 ctx.save.debug = params->debug;
 ctx.save.checkpointed = params->stream_type;
diff --git a/tools/libxl/libxl_dom_save.c b/tools/libxl/libxl_dom_save.c
index c27813a..b65135d 100644
--- a/tools/libxl/libxl_dom_save.c
+++ b/tools/libxl/libxl_dom_save.c
@@ -328,6 +328,28 @@ int libxl__save_emulator_xenstore_data(libxl__domain_save_state *dss,
 return rc;
 }
 
+/*
+ * This is the live migration precopy policy - it's called periodically during
+ * the precopy phase of live migrations, and is responsible for deciding when
+ * the precopy phase should terminate and what should be done next.
+ *
+ * The policy implemented here behaves identically to the policy previously
+ * hard-coded into xc_domain_save() - it proceeds to the stop-and-copy phase of
+ * the live migration when there are either fewer than 50 dirty pages, or more
+ * than 5 precopy rounds have completed.
+ */
+static int libxl__save_live_migration_precopy_policy(
+struct precopy_stats stats, void *user)
+{
+if (stats.dirty_count >= 0 && stats.dirty_count < 50)
+return XGS_POLICY_STOP_AND_COPY;
+
+if (stats.iteration >= 5)
+return XGS_POLICY_STOP_AND_COPY;
+
+return XGS_POLICY_CONTINUE_PRECOPY;
+}
+
 /*- main code for saving, in order of execution -*/
 
 void libxl__domain_save(libxl__egc *egc, libxl__domain_save_state *dss)
@@ -390,6 +412,7 @@ void libxl__domain_save(libxl__egc *egc, libxl__domain_save_state *dss)
 if (dss->checkpointed_stream == LIBXL_CHECKPOINTED_STREAM_NONE)
 callbacks->suspend = libxl__domain_suspend_callback;
 
+callbacks->precopy_policy = libxl__save_live_migration_precopy_policy;
callbacks->switch_qemu_logdirty = libxl__domain_suspend_common_switch_qemu_logdirty;
 
 dss->sws.ao  = dss->ao;
diff --git a/tools/libxl/libxl_save_msgs_gen.pl b/tools/libxl/libxl_save_msgs_gen.pl
index 27845bb..50c97b4 100755
--- a/tools/libxl/libxl_save_msgs_gen.pl
+++ b/tools/libxl/libxl_save_msgs_gen.pl
@@ -33,6 +33,7 @@ our @msgs = (
   'xen_pfn_t', 'console_gfn'] ],
 [  9, 'srW',"complete",  [qw(int retval
  int errnoval)] ],
+[ 10, 'scxW',   "precopy_policy", ['struct precopy_stats', 'stats'] ]
 );
 
 #
@@ -141,7 +142,8 @@ static void bytes_put(unsigned char *const buf, int *len,
 
 END
 
-foreach my $simpletype (qw(int uint16_t uint32_t unsigned), 'unsigned long', 'xen_pfn_t') {
+foreach my $simpletype (qw(int uint16_t uint32_t unsigned),
+'unsigned long', 'xen_pfn_t', 'struct precopy_stats') {
 my $typeid = typeid($simpletype);
 $out_body{'callout'} .= 

[Xen-devel] [PATCH RFC v2 23/23] libxc/xc_sr_restore.c: use populate_evicted()

2018-06-17 Thread Joshua Otto
From: Joshua Otto 

During the transition downtime phase of postcopy live migration, mark
batches of dirty pfns as paged-out using the new populate_evicted()
paging op rather than populating, nominating and evicting each dirty pfn
individually.  This significantly reduces downtime during transitions
with many dirty pfns.

Signed-off-by: Joshua Otto 
---
 tools/libxc/xc_sr_restore.c | 71 +
 1 file changed, 46 insertions(+), 25 deletions(-)

diff --git a/tools/libxc/xc_sr_restore.c b/tools/libxc/xc_sr_restore.c
index 3aac0f0..950bbf0 100644
--- a/tools/libxc/xc_sr_restore.c
+++ b/tools/libxc/xc_sr_restore.c
@@ -672,13 +672,15 @@ static int process_postcopy_pfns(struct xc_sr_context *ctx, unsigned int count,
 xc_interface *xch = ctx->xch;
 struct xc_sr_restore_paging *paging = &ctx->restore.paging;
 int rc;
-unsigned int i;
+unsigned int i, nr_bpfns = 0, nr_xapfns = 0;
 xen_pfn_t bpfn;
+xen_pfn_t *bpfns = malloc(count * sizeof(*bpfns)),
+  *xapfns = malloc(count * sizeof(*xapfns));
 
-rc = populate_pfns(ctx, count, pfns, types);
-if ( rc )
+if ( !bpfns || !xapfns )
 {
-ERROR("Failed to populate pfns for batch of %u pages", count);
+rc = -1;
+ERROR("Failed to allocate %zu bytes pfns", 2 * count * sizeof(*bpfns));
 goto out;
 }
 
@@ -686,7 +688,7 @@ static int process_postcopy_pfns(struct xc_sr_context *ctx, unsigned int count,
 {
 if ( types[i] < XEN_DOMCTL_PFINFO_BROKEN )
 {
-bpfn = pfns[i];
+bpfn = bpfns[nr_bpfns++] = pfns[i];
 
 /* We should never see the same pfn twice at this stage.  */
 if ( !postcopy_pfn_invalid(ctx, bpfn) )
@@ -695,6 +697,42 @@ static int process_postcopy_pfns(struct xc_sr_context *ctx, unsigned int count,
 ERROR("Duplicate postcopy pfn %"PRI_xen_pfn, bpfn);
 goto out;
 }
+}
+else if ( types[i] == XEN_DOMCTL_PFINFO_XALLOC )
+{
+xapfns[nr_xapfns++] = pfns[i];
+}
+}
+
+/* Follow the normal path to populate XALLOC pfns... */
+rc = populate_pfns(ctx, nr_xapfns, xapfns, NULL);
+if ( rc )
+{
+ERROR("Failed to populate pfns for batch of %u pages", nr_xapfns);
+goto out;
+}
+
+/* ... and 'populate' the backed pfns directly into the evicted state. */
+if ( nr_bpfns )
+{
+rc = xc_mem_paging_populate_evicted(xch, ctx->domid, bpfns, nr_bpfns);
+if ( rc )
+{
+ERROR("Failed to evict batch of %u pfns", nr_bpfns);
+goto out;
+}
+
+for ( i = 0; i < nr_bpfns; ++i )
+{
+bpfn = bpfns[i];
+
+/* If it hasn't yet been populated, mark it as so now. */
+if ( !pfn_is_populated(ctx, bpfn) )
+{
+rc = pfn_set_populated(ctx, bpfn);
+if ( rc )
+goto out;
+}
 
 /*
  * We now consider this pfn 'outstanding' - pending, and not yet
@@ -702,32 +740,15 @@ static int process_postcopy_pfns(struct xc_sr_context *ctx, unsigned int count,
  */
 mark_postcopy_pfn_outstanding(ctx, bpfn);
 ++paging->nr_pending_pfns;
-
-/*
- * Neither nomination nor eviction can be permitted to fail - the
- * guest isn't yet running, so a failure would imply a foreign or
- * hypervisor mapping on the page, and that would be bogus because
- * the migration isn't yet complete.
- */
-rc = xc_mem_paging_nominate(xch, ctx->domid, bpfn);
-if ( rc < 0 )
-{
-PERROR("Error nominating postcopy pfn %"PRI_xen_pfn, bpfn);
-goto out;
-}
-
-rc = xc_mem_paging_evict(xch, ctx->domid, bpfn);
-if ( rc < 0 )
-{
-PERROR("Error evicting postcopy pfn %"PRI_xen_pfn, bpfn);
-goto out;
-}
 }
 }
 
 rc = 0;
 
  out:
+free(bpfns);
+free(xapfns);
+
 return rc;
 }
 
-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] Patch "x86/xen: Reset VCPU0 info pointer after shared_info remap" has been added to the 4.16-stable tree

2018-06-17 Thread gregkh

This is a note to let you know that I've just added the patch titled

x86/xen: Reset VCPU0 info pointer after shared_info remap

to the 4.16-stable tree which can be found at:

http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
 x86-xen-reset-vcpu0-info-pointer-after-shared_info-remap.patch
and it can be found in the queue-4.16 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let  know about it.


From foo@baz Sun Jun 17 12:07:34 CEST 2018
From: "van der Linden, Frank" 
Date: Fri, 4 May 2018 16:11:00 -0400
Subject: x86/xen: Reset VCPU0 info pointer after shared_info remap

From: "van der Linden, Frank" 

[ Upstream commit d1ecfa9d1f402366b1776fbf84e635678a51414f ]

This patch fixes crashes during boot for HVM guests on older (pre HVM
vector callback) Xen versions. Without this, current kernels will always
fail to boot on those Xen versions.

Sample stack trace:

   BUG: unable to handle kernel paging request at ff20
   IP: __xen_evtchn_do_upcall+0x1e/0x80
   PGD 1e0e067 P4D 1e0e067 PUD 1e10067 PMD 235c067 PTE 0
Oops: 0002 [#1] SMP PTI
   Modules linked in:
   CPU: 0 PID: 512 Comm: kworker/u2:0 Not tainted 4.14.33-52.13.amzn1.x86_64 #1
   Hardware name: Xen HVM domU, BIOS 3.4.3.amazon 11/11/2016
   task: 88002531d700 task.stack: c948
   RIP: 0010:__xen_evtchn_do_upcall+0x1e/0x80
   RSP: :880025403ef0 EFLAGS: 00010046
   RAX: 813cc760 RBX: ff20 RCX: c9483ef0
   RDX: 880020540a00 RSI: 880023c78000 RDI: 001c
   RBP: 0001 R08:  R09: 
   R10:  R11:  R12: 
   R13: 880025403f5c R14:  R15: 
   FS:  () GS:88002540() knlGS:
   CS:  0010 DS:  ES:  CR0: 80050033
   CR2: ff20 CR3: 01e0a000 CR4: 06f0
Call Trace:
   
   do_hvm_evtchn_intr+0xa/0x10
   __handle_irq_event_percpu+0x43/0x1a0
   handle_irq_event_percpu+0x20/0x50
   handle_irq_event+0x39/0x60
   handle_fasteoi_irq+0x80/0x140
   handle_irq+0xaf/0x120
   do_IRQ+0x41/0xd0
   common_interrupt+0x7d/0x7d
   

During boot, the HYPERVISOR_shared_info page gets remapped to make it work
with KASLR. This means that any pointer derived from it needs to be
adjusted.

The only value that this applies to is the vcpu_info pointer for VCPU 0.
For PV and HVM with the callback vector feature, this gets done via the
smp_ops prepare_boot_cpu callback. Older Xen versions do not support the
HVM callback vector, so there is no Xen-specific smp_ops set up in that
scenario. So, the vcpu_info pointer for VCPU 0 never gets set to the proper
value, and the first reference of it will be bad. Fix this by resetting it
immediately after the remap.

Signed-off-by: Frank van der Linden 
Reviewed-by: Eduardo Valentin 
Reviewed-by: Alakesh Haloi 
Reviewed-by: Vallish Vaidyeshwara 
Reviewed-by: Boris Ostrovsky 
Cc: Juergen Gross 
Cc: Boris Ostrovsky 
Cc: xen-devel@lists.xenproject.org
Signed-off-by: Boris Ostrovsky 
Signed-off-by: Sasha Levin 
Signed-off-by: Greg Kroah-Hartman 
---
 arch/x86/xen/enlighten_hvm.c |   13 +
 1 file changed, 13 insertions(+)

--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -65,6 +65,19 @@ static void __init xen_hvm_init_mem_mapp
 {
early_memunmap(HYPERVISOR_shared_info, PAGE_SIZE);
HYPERVISOR_shared_info = __va(PFN_PHYS(shared_info_pfn));
+
+   /*
+* The virtual address of the shared_info page has changed, so
+* the vcpu_info pointer for VCPU 0 is now stale.
+*
+* The prepare_boot_cpu callback will re-initialize it via
+* xen_vcpu_setup, but we can't rely on that to be called for
+* old Xen versions (xen_have_vector_callback == 0).
+*
+* It is, in any case, bad to have a stale vcpu_info pointer
+* so reset it now.
+*/
+   xen_vcpu_info_reset(0);
 }
 
 static void __init init_hvm_pv_info(void)


Patches currently in stable-queue which might be from fllin...@amazon.com are

queue-4.16/x86-xen-reset-vcpu0-info-pointer-after-shared_info-remap.patch


[Xen-devel] [PATCH RFC] x86/resume: take care of fully eager FPU around system suspend

2018-06-17 Thread Jan Beulich
Just like in the HVM emulation and EFI runtime call cases we must not
set CR0.TS here in fully eager mode. Note that idle vCPU-s never have
->arch.fully_eager_fpu set (for their initialization not going through
vcpu_init_fpu()), so we won't hit the respective ASSERT() in
vcpu_restore_fpu_eager(). 

Signed-off-by: Jan Beulich 
---
RFC: Not even compile tested, as I'm writing this from home. Also please
 excuse the formatting (hence the attachment) - our mail web frontend
  doesn't allow anything better.

--- a/xen/arch/x86/acpi/suspend.c
+++ b/xen/arch/x86/acpi/suspend.c
@@ -92,8 +92,11 @@ void restore_rest_processor_state(void)
 write_debugreg(7, curr->arch.debugreg[7]);
 }
 
-/* Reload FPU state on next FPU use. */
-stts();
+/* Reload FPU state immediately or on next FPU use. */
+if ( curr->arch.fully_eager_fpu )
+vcpu_restore_fpu_eager(curr);
+else
+stts();
 
 if (cpu_has_pat)
 wrmsrl(MSR_IA32_CR_PAT, host_pat);


[Xen-devel] [xen-unstable test] 124225: regressions - FAIL

2018-06-17 Thread osstest service owner
flight 124225 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/124225/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-credit2   7 xen-boot fail REGR. vs. 124090
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 16 guest-localmigrate/x10 fail REGR. vs. 124090
 test-amd64-i386-xl-qemut-debianhvm-amd64 16 guest-localmigrate/x10 fail REGR. vs. 124090

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 124057
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 124057
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 124057
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 124090
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 124090
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 124090
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 124090
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 124090
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 124090
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 124090
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 xen  e23d2234e08872ac1c719f3e338994581483440f
baseline version:
 xen  11535cdbc0ae5925a55b3e735447c30faaa6f63b

Last test of basis   124090  2018-06-12 01:51:41 Z5 days
Failing since124140  2018-06-12 17:06:49 Z4 days4 attempts
Testing same since   124225  2018-06-15 22:20:47 Z1 days 

Re: [Xen-devel] Status of comet-4.10 branch

2018-06-17 Thread Jan Beulich
>>> Ian Jackson  06/15/18 6:26 PM >>>
>The right approach to this depends on whether the functionality in the
>comet and shim branches is now in released Xen branches.  Should comet
>4.10 be retired in favour of stable-4.10 or RELEASE-4.10.1 ?

It is my understanding that with the merging of the shim code into the 4.10
branch, the separate branch became obsolete.

Jan




[Xen-devel] [linux-4.9 test] 124223: regressions - FAIL

2018-06-17 Thread osstest service owner
flight 124223 linux-4.9 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/124223/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  7 xen-bootfail REGR. vs. 122969
 test-amd64-amd64-libvirt-xsm  7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-xl-shadow7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-xl-qemuu-win10-i386  7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-xl-qemut-win10-i386  7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-xl-qemut-win7-amd64  7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-xl-qemut-ws16-amd64  7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-xl-qemut-debianhvm-amd64  7 xen-bootfail REGR. vs. 122969
 test-amd64-amd64-libvirt-vhd  7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-libvirt  7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-qemuu-nested-intel  7 xen-boot  fail REGR. vs. 122969
 test-amd64-amd64-i386-pvgrub  7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-xl-pvhv2-intel  7 xen-boot  fail REGR. vs. 122969
 test-amd64-amd64-xl-credit2   7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-xl-xsm   7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-xl-qcow2 7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-pygrub   7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-libvirt-pair 10 xen-boot/src_host   fail REGR. vs. 122969
 test-amd64-amd64-libvirt-pair 11 xen-boot/dst_host   fail REGR. vs. 122969
 test-amd64-amd64-amd64-pvgrub  7 xen-bootfail REGR. vs. 122969
 test-amd64-amd64-xl-qemuu-ws16-amd64  7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-xl-qemuu-win7-amd64  7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-xl-qemuu-ovmf-amd64  7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-rumprun-amd64  7 xen-boot   fail REGR. vs. 122969
 test-amd64-amd64-xl   7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-xl-multivcpu  7 xen-bootfail REGR. vs. 122969
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-xl-pvshim7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 122969
 test-amd64-amd64-pair10 xen-boot/src_hostfail REGR. vs. 122969
 test-amd64-amd64-pair11 xen-boot/dst_hostfail REGR. vs. 122969
 test-amd64-amd64-examine  8 reboot   fail REGR. vs. 122969
 build-i386-libvirt6 libvirt-build  fail in 124190 REGR. vs. 122969

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-raw  6 xen-installfail pass in 124190

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds  7 xen-boot fail REGR. vs. 122969

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt   1 build-check(1)   blocked in 124190 n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked in 124190 n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked in 124190 n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked in 124190 n/a
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail in 124190 never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail in 124190 never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 122969
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 122969
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass