[Xen-devel] [libvirt test] 112119: tolerable all pass - PUSHED

2017-07-22 Thread osstest service owner
flight 112119 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/112119/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 112081
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 112081
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 112081
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-arm64-arm64-libvirt-qcow2 12 migrate-support-checkfail never pass
 test-arm64-arm64-libvirt-qcow2 13 saverestore-support-checkfail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass

version targeted for testing:
 libvirt  7cc30e0ed7ebf54a4db592ec1fdb6063ec788b75
baseline version:
 libvirt  e04d1074f801a211e2767545e2816cc98d820dd3

Last test of basis   112081  2017-07-21 04:21:50 Z    1 days
Testing same since   112119  2017-07-22 04:20:13 Z    0 days    1 attempts


People who touched revisions under test:
  Andrea Bolognani 
  dann frazier 
  John Ferlan 
  Michal Privoznik 
  Shivaprasad G Bhat 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-arm64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvops  pass
 build-arm64-pvops  pass
 build-armhf-pvops  pass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-libvirt-xsm pass
 test-arm64-arm64-libvirt-xsm pass
 test-armhf-armhf-libvirt-xsm pass
 test-amd64-i386-libvirt-xsm  pass
 test-amd64-amd64-libvirt pass
 test-arm64-arm64-libvirt pass
 test-armhf-armhf-libvirt pass
 test-amd64-i386-libvirt  pass
 test-amd64-amd64-libvirt-pairpass
 test-amd64-i386-libvirt-pair pass
 test-arm64-arm64-libvirt-qcow2   pass
 test-armhf-armhf-libvirt-raw pass
 test-amd64-amd64-libvirt-vhd pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master

Re: [Xen-devel] [PATCH v5 6/8] mm: Keep heap accessible to others while scrubbing

2017-07-22 Thread Boris Ostrovsky



On 06/27/2017 03:28 PM, Jan Beulich wrote:

Boris Ostrovsky  06/22/17 8:56 PM >>>

Changes in v5:
* Fixed off-by-one error in setting first_dirty
* Changed struct page_info.u.free to a union to permit use of ACCESS_ONCE in
   check_and_stop_scrub()


I don't see the need for this:


+static void check_and_stop_scrub(struct page_info *head)
+{
+    if ( head->u.free.scrub_state == BUDDY_SCRUBBING )
+    {
+        struct page_info pg;
+
+        head->u.free.scrub_state = BUDDY_SCRUB_ABORT;
+        spin_lock_kick();
+        for ( ; ; )
+        {
+            /* Can't ACCESS_ONCE() a bitfield. */
+            pg.u.free.val = ACCESS_ONCE(head->u.free.val);


Something like ACCESS_ONCE(head->u.free).val ought to work (or read_atomic(),
due to the questionable scalar type check in ACCESS_ONCE()).


Hmm... I couldn't get this to work with either suggestion.

page_alloc.c:751:13: error: conversion to non-scalar type requested
 pg.u.free = read_atomic(&head->u.free);

page_alloc.c:753:6: error: conversion to non-scalar type requested
  if ( ACCESS_ONCE(head->u.free).scrub_state != BUDDY_SCRUB_ABORT )



@@ -1106,25 +1155,53 @@ bool scrub_free_pages(void)
  do {
  while ( !page_list_empty(&heap(node, zone, order)) )
  {
-unsigned int i;
+unsigned int i, dirty_cnt;
+struct scrub_wait_state st;
  
  /* Unscrubbed pages are always at the end of the list. */

  pg = page_list_last(&heap(node, zone, order));
  if ( pg->u.free.first_dirty == INVALID_DIRTY_IDX )
  break;
  
+ASSERT(!pg->u.free.scrub_state);


Please use BUDDY_NOT_SCRUBBING here.


@@ -1138,6 +1215,17 @@ bool scrub_free_pages(void)
  }
  }
  
+st.pg = pg;
+st.first_dirty = (i >= (1UL << order) - 1) ?
+                 INVALID_DIRTY_IDX : i + 1;


Would you mind explaining to me (again?) why you can't set pg's first_dirty
directly here? In case I'm not mistaken and this has been asked before, maybe
this is a hint that a comment might be warranted.



In get_free_buddy() (formerly part of alloc_heap_pages()) I have

    /* Find smallest order which can satisfy the request. */
    for ( j = order; j <= MAX_ORDER; j++ )
    {
        if ( (pg = page_list_remove_head(&heap(node, zone, j))) )
        {
            if ( pg->u.free.first_dirty == INVALID_DIRTY_IDX )
                return pg;
            /*
             * We grab single pages (order=0) even if they are
             * unscrubbed. Given that scrubbing one page is fairly quick
             * it is not worth breaking higher orders.
             */
            if ( (order == 0) || use_unscrubbed )
            {
                check_and_stop_scrub(pg);
                return pg;
            }


If first_dirty gets assigned INVALID_DIRTY_IDX then get_free_buddy() 
will return pg right away without telling the scrubber that the buddy 
has been taken for use. The scrubber will then put the buddy back on the 
heap.



-boris

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [qemu-mainline bisection] complete build-i386

2017-07-22 Thread osstest service owner
branch xen-unstable
xenbranch xen-unstable
job build-i386
testid xen-build

Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  5ba3d7564593c55292056ef5af84d50b55ebcf0e
  Bug not present: 759235653de427e4e7b62d8e6fb1ef9cb68bac7d
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/112194/


  commit 5ba3d7564593c55292056ef5af84d50b55ebcf0e
  Author: Igor Druzhinin 
  Date:   Mon Jul 10 23:40:02 2017 +0100
  
  xen/mapcache: introduce xen_replace_cache_entry()
  
  This new call is trying to update a requested map cache entry
  according to the changes in the physmap. The call is searching
  for the entry, unmaps it and maps again at the same place using
  a new guest address. If the mapping is dummy this call will
  make it real.
  
  This function makes use of a new xenforeignmemory_map2() call
  with an extended interface that was recently introduced in
  libxenforeignmemory [1].
  
  [1] https://www.mail-archive.com/xen-devel@lists.xen.org/msg113007.html
  
  Signed-off-by: Igor Druzhinin 
  Reviewed-by: Paul Durrant 
  Reviewed-by: Stefano Stabellini 
  Signed-off-by: Stefano Stabellini 


*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  b3e46a89147493d4474dafe983befca2d6500275
  Bug not present: a51568b78ea011e0f1e67664b8b0c6b693f8ee5a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/112183/


  commit b3e46a89147493d4474dafe983befca2d6500275
  Merge: a51568b 331b518
  Author: Peter Maydell 
  Date:   Wed Jul 19 16:31:08 2017 +0100
  
  Merge remote-tracking branch 'remotes/sstabellini/tags/xen-20170718-tag' 
into staging
  
  Xen 2017/07/18
  
  # gpg: Signature made Tue 18 Jul 2017 23:18:16 BST
  # gpg:using RSA key 0x894F8F4870E1AE90
  # gpg: Good signature from "Stefano Stabellini 
"
  # gpg: aka "Stefano Stabellini "
  # Primary key fingerprint: D04E 33AB A51F 67BA 07D3  0AEA 894F 8F48 70E1 
AE90
  
  * remotes/sstabellini/tags/xen-20170718-tag:
xen: don't use xenstore to save/restore physmap anymore
xen/mapcache: introduce xen_replace_cache_entry()
xen/mapcache: add an ability to create dummy mappings
xen: move physmap saving into a separate function
xen-platform: separate unplugging of NVMe disks
xen_pt_msi.c: Check for xen_host_pci_get_* failures in 
xen_pt_msix_init()
hw/xen: Set emu_mask for igd_opregion register
  
  Signed-off-by: Peter Maydell 
  
  commit 331b5189d756d431b1d18ae7097527ba3d3ea809
  Author: Igor Druzhinin 
  Date:   Mon Jul 10 23:40:03 2017 +0100
  
  xen: don't use xenstore to save/restore physmap anymore
  
  If we have a system with xenforeignmemory_map2() implemented
  we don't need to save/restore physmap on suspend/restore
  anymore. In case we resume a VM without physmap - try to
  recreate the physmap during memory region restore phase and
  remap map cache entries accordingly. The old code is left
  for compatibility reasons.
  
  Signed-off-by: Igor Druzhinin 
  Reviewed-by: Paul Durrant 
  Reviewed-by: Stefano Stabellini 
  Signed-off-by: Stefano Stabellini 
  
  commit 5ba3d7564593c55292056ef5af84d50b55ebcf0e
  Author: Igor Druzhinin 
  Date:   Mon Jul 10 23:40:02 2017 +0100
  
  xen/mapcache: introduce xen_replace_cache_entry()
  
  This new call is trying to update a requested map cache entry
  according to the changes in the physmap. The call is searching
  for the entry, unmaps it and maps again at the same place using
  a new guest address. If the mapping is dummy this call will
  make it real.
  
  This function makes use of a new xenforeignmemory_map2() call
  with an extended interface that was recently introduced in
  libxenforeignmemory [1].
  
  [1] https://www.mail-archive.com/xen-devel@lists.xen.org/msg113007.html
  
  Signed-off-by: Igor Druzhinin 
  Reviewed-by: Paul Durrant 
  Reviewed-by: Stefano Stabellini 
  Signed-off-by: Stefano Stabellini 
  
  commit 759235653de427e4e7b62d8e6fb1ef9cb68bac7d
  Author: Igor Druzhinin 

[Xen-devel] [linux-linus bisection] complete test-amd64-amd64-xl-qemut-debianhvm-amd64

2017-07-22 Thread osstest service owner
branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemut-debianhvm-amd64
testid xen-boot

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux 
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
  Bug introduced:  921edf312a6a20be16cf2b60e0dec3dce35e5cb9
  Bug not present: 3c2bfbaadff6e0c257bb6b16c9c97f43618b13dc
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/112191/


  (Revision log too long, omitted.)


For bisection revision-tuple graph see:
   
http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-linus/test-amd64-amd64-xl-qemut-debianhvm-amd64.xen-boot.html
Revision IDs in each graph node refer, respectively, to the Trees above.


Running cs-bisection-step 
--graph-out=/home/logs/results/bisect/linux-linus/test-amd64-amd64-xl-qemut-debianhvm-amd64.xen-boot
 --summary-out=tmp/112191.bisection-summary --basis-template=110515 
--blessings=real,real-bisect linux-linus 
test-amd64-amd64-xl-qemut-debianhvm-amd64 xen-boot
Searching for failure / basis pass:
 112083 fail [host=chardonnay0] / 111739 [host=rimava1] 111714 [host=godello0] 
111677 [host=baroque0] 111654 [host=godello1] 111635 [host=rimava0] 111611 
[host=huxelrebe0] 111580 [host=elbling0] 111529 [host=baroque1] 111493 
[host=chardonnay1] 111416 [host=fiano0] 111383 [host=merlot1] 111374 
[host=italia1] 111363 [host=pinot0] 111332 [host=italia0] 111280 
[host=nobling0] 111222 [host=huxelrebe1] 83 [host=nobling1] 48 ok.
Failure / basis pass flights: 112083 / 48
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 921edf312a6a20be16cf2b60e0dec3dce35e5cb9 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
8051789e982499050680a26febeada7467e18a8d 
414d069b38ab114b89085e44989bf57604ea86d7 
d535d8922f571502252deaf607e82e7475cd1728
Basis pass 3c2bfbaadff6e0c257bb6b16c9c97f43618b13dc 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
8051789e982499050680a26febeada7467e18a8d 
414d069b38ab114b89085e44989bf57604ea86d7 
695bb5f504ab48c1d546446f104c1b6c0ead126d
Generating revisions with ./adhoc-revtuple-generator  
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git#3c2bfbaadff6e0c257bb6b16c9c97f43618b13dc-921edf312a6a20be16cf2b60e0dec3dce35e5cb9
 
git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860
 
git://xenbits.xen.org/qemu-xen-traditional.git#8051789e982499050680a26febeada7467e18a8d-8051789e982499050680a26febeada7467e18a8d
 
git://xenbits.xen.org/qemu-xen.git#414d069b38ab114b89085e44989bf57604ea86d7-414d069b38ab114b89085e44989bf57604ea86d7
 
git://xenbits.xen.org/xen.git#695bb5f504ab48c1d546446f104c1b6c0ead126d-d535d8922f571502252deaf607e82e7475cd1728
adhoc-revtuple-generator: tree discontiguous: linux-2.6
Loaded 1002 nodes in revision graph
Searching for test results:
 110464 [host=nobling1]
 110486 [host=nobling0]
 110515 [host=elbling0]
 110547 [host=rimava1]
 110536 [host=baroque1]
 110560 [host=italia1]
 110908 [host=fiano0]
 110950 [host=rimava0]
 110984 [host=pinot1]
 111081 [host=godello1]
 24 [host=godello0]
 48 pass 3c2bfbaadff6e0c257bb6b16c9c97f43618b13dc 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
8051789e982499050680a26febeada7467e18a8d 
414d069b38ab114b89085e44989bf57604ea86d7 
695bb5f504ab48c1d546446f104c1b6c0ead126d
 111280 [host=nobling0]
 83 [host=nobling1]
 111222 [host=huxelrebe1]
 111332 [host=italia0]
 111363 [host=pinot0]
 111374 [host=italia1]
 111383 [host=merlot1]
 111416 [host=fiano0]
 111493 [host=chardonnay1]
 111529 [host=baroque1]
 111580 [host=elbling0]
 111611 [host=huxelrebe0]
 111635 [host=rimava0]
 111654 [host=godello1]
 111677 [host=baroque0]
 111714 [host=godello0]
 111739 [host=rimava1]
 111771 fail irrelevant
 111800 fail irrelevant
 111831 fail irrelevant
 111866 fail irrelevant
 111939 fail irrelevant
 111972 fail irrelevant
 112019 fail irrelevant
 111995 fail irrelevant
 112049 fail irrelevant
 112083 fail 921edf312a6a20be16cf2b60e0dec3dce35e5cb9 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
8051789e982499050680a26febeada7467e18a8d 
414d069b38ab114b89085e44989bf57604ea86d7 
d535d8922f571502252deaf607e82e7475cd1728
 112136 pass 3c2bfbaadff6e0c257bb6b16c9c97f43618b13dc 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
8051789e982499050680a26febeada7467e18a8d 

Re: [Xen-devel] [PATCH v5 4/8] mm: Scrub memory from idle loop

2017-07-22 Thread Boris Ostrovsky





@@ -1050,17 +1120,42 @@ static void scrub_free_pages(unsigned int node)
  scrub_one_page(&pg[i]);
  pg[i].count_info &= ~PGC_need_scrub;
  node_need_scrub[node]--;
+cnt += 100; /* scrubbed pages add heavier weight. */
+}
+else
+cnt++;
+
+/*
+ * Scrub a few (8) pages before becoming eligible for
+ * preemption. But also count non-scrubbing loop iterations
+ * so that we don't get stuck here with an almost clean
+ * heap.
+ */
+if ( cnt > 800 && softirq_pending(cpu) )
+{
+preempt = true;
+break;
  }
  }
  
-        page_list_del(pg, &heap(node, zone, order));
-        page_list_add_scrub(pg, node, zone, order, INVALID_DIRTY_IDX);
+        if ( i >= (1U << order) - 1 )
+        {
+            page_list_del(pg, &heap(node, zone, order));
+            page_list_add_scrub(pg, node, zone, order, INVALID_DIRTY_IDX);
+        }
+        else
+            pg->u.free.first_dirty = i + 1;
  
-        if ( node_need_scrub[node] == 0 )
-            return;
+        if ( preempt || (node_need_scrub[node] == 0) )
+            goto out;
            }
        } while ( order-- != 0 );
    }
+
+ out:
+    spin_unlock(&heap_lock);
+    node_clear(node, node_scrubbing);
+    return softirq_pending(cpu) || (node_to_scrub(false) != NUMA_NO_NODE);


While I can see why you use it here, the softirq_pending() looks sort of
misplaced: While invoking it twice in the caller will look a little odd too,
I still think that's where the check belongs.



scrub_free_pages() is called from the idle loop as

else if ( !softirq_pending(cpu) && !scrub_free_pages() )
pm_idle();

so softirq_pending() is unnecessary here.

(Not sure why you are saying it would be invoked twice)
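
For readers skimming the thread: the preemption decision being discussed is driven by the weighting in the hunk above, where a scrubbed page adds 100 to cnt, a clean page adds 1, and softirq_pending() is only consulted once cnt exceeds 800, i.e. after roughly eight scrubbed pages or after many cheap iterations on an almost-clean heap. A stand-alone sketch of just that heuristic; everything other than the 100/1/800 constants is invented for illustration:

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for softirq_pending(cpu): pretend other work is always waiting. */
static bool other_work_pending(void)
{
    return true;
}

int main(void)
{
    unsigned int cnt = 0, scrubbed = 0, skipped = 0;

    for ( unsigned int i = 0; i < 100000; i++ )
    {
        bool needs_scrub = (i % 50 == 0);    /* a mostly clean "heap" */

        if ( needs_scrub )
        {
            scrubbed++;                      /* expensive work: weight 100 */
            cnt += 100;
        }
        else
        {
            skipped++;                       /* cheap iteration: weight 1 */
            cnt++;
        }

        /* Only once enough weight has accumulated do we even look at
         * whether we should yield. */
        if ( cnt > 800 && other_work_pending() )
        {
            printf("preempt after %u scrubbed / %u skipped pages\n",
                   scrubbed, skipped);
            break;
        }
    }
    return 0;
}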

-boris

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v5 3/8] mm: Scrub pages in alloc_heap_pages() if needed

2017-07-22 Thread Boris Ostrovsky



On 06/27/2017 02:00 PM, Jan Beulich wrote:

Boris Ostrovsky  06/22/17 8:55 PM >>>

@@ -862,10 +879,19 @@ static struct page_info *alloc_heap_pages(
  if ( d != NULL )
  d->last_alloc_node = node;
  
+need_scrub = !!first_dirty_pg && !(memflags & MEMF_no_scrub);


No need for !! here. But I wonder whether that part of the check is really
useful anyway, considering the sole use ...


  for ( i = 0; i < (1 << order); i++ )
  {
  /* Reference count must continuously be zero for free pages. */
-        BUG_ON(pg[i].count_info != PGC_state_free);
+        BUG_ON((pg[i].count_info & ~PGC_need_scrub) != PGC_state_free);
+
+        if ( test_bit(_PGC_need_scrub, &pg[i].count_info) )
+        {
+            if ( need_scrub )
+                scrub_one_page(&pg[i]);


... here. If it isn't, I think the local variable isn't warranted either.
If you agree, the thus adjusted patch can have
Reviewed-by: Jan Beulich 
(otherwise I'll wait with it to understand the reason first).




first_dirty_pg is indeed unnecessary, but I think the local variable is 
useful to avoid ANDing memflags inside the loop on every iteration 
(unless you think the compiler is smart enough to realize that memflags 
is not changing).
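
To illustrate that point in isolation (a generic sketch, not the Xen code; the MEMF_no_scrub value and the helper below are made up for the example): the flag test is loop-invariant, so it can be evaluated once into a local before the loop instead of on every iteration.

#include <stdbool.h>
#include <stddef.h>

#define MEMF_no_scrub (1u << 3)            /* illustrative value only */

static void scrub_chunk(unsigned char *pages, size_t nr, unsigned int memflags)
{
    /* Hoist the loop-invariant test; an optimizing compiler may well do
     * this anyway, but the local makes the intent explicit. */
    bool need_scrub = !(memflags & MEMF_no_scrub);

    for ( size_t i = 0; i < nr; i++ )
        if ( need_scrub )
            pages[i] = 0;                  /* stand-in for scrub_one_page(&pg[i]) */
}

int main(void)
{
    unsigned char buf[16] = { 1, 2, 3 };

    scrub_chunk(buf, sizeof(buf), 0);      /* no MEMF_no_scrub => scrub */
    return (int)buf[0];                    /* 0 if the chunk was scrubbed */
}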



-boris

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v5 1/8] mm: Place unscrubbed pages at the end of pagelist

2017-07-22 Thread Boris Ostrovsky



On 06/27/2017 01:06 PM, Jan Beulich wrote:

Boris Ostrovsky  06/22/17 8:55 PM >>>

I kept node_need_scrub[] as a global array and not a "per-node". I think 
splitting
it should be part of making heap_lock a per-node lock, together with increasing
scrub concurrency by having more than one CPU scrub a node.


Agreed - I hadn't meant my earlier comment to necessarily affect this series.


@@ -798,11 +814,26 @@ static struct page_info *alloc_heap_pages(
  return NULL;
  
   found:

+
+if ( pg->u.free.first_dirty != INVALID_DIRTY_IDX )
+first_dirty_pg = pg + pg->u.free.first_dirty;
+
  /* We may have to halve the chunk a number of times. */
  while ( j != order )
  {
-PFN_ORDER(pg) = --j;
-        page_list_add_tail(pg, &heap(node, zone, j));
+unsigned int first_dirty;
+
+if ( first_dirty_pg && ((pg + (1 << j)) > first_dirty_pg) )


Despite the various examples of doing it this way, please at least use 1u.


+{
+if ( pg < first_dirty_pg )
+first_dirty = (first_dirty_pg - pg) / sizeof(*pg);


Pointer subtraction already includes the involved division. 



Yes, this was a mistake.


Otoh I wonder
if you couldn't get away without pointer comparison/subtraction here
altogether.



Without comparison I can only assume that first_dirty is zero (i.e. the 
whole buddy is potentially dirty). Is there something else I could do?






@@ -849,13 +880,22 @@ static int reserve_offlined_page(struct page_info *head)
  {
  unsigned int node = phys_to_nid(page_to_maddr(head));
  int zone = page_to_zone(head), i, head_order = PFN_ORDER(head), count = 0;
-struct page_info *cur_head;
+struct page_info *cur_head, *first_dirty_pg = NULL;
  int cur_order;
  
  ASSERT(spin_is_locked(&heap_lock));
  
  cur_head = head;
  
+/*

+ * We may break the buddy so let's mark the head as clean. Then, when
+ * merging chunks back into the heap, we will see whether the chunk has
+ * unscrubbed pages and set its first_dirty properly.
+ */
+if (head->u.free.first_dirty != INVALID_DIRTY_IDX)


Coding style.


@@ -892,8 +934,25 @@ static int reserve_offlined_page(struct page_info *head)
  {
  merge:
  /* We don't consider merging outside the head_order. */
-        page_list_add_tail(cur_head, &heap(node, zone, cur_order));
-PFN_ORDER(cur_head) = cur_order;
+
+/* See if any of the pages indeed need scrubbing. */
+        if ( first_dirty_pg && (cur_head + (1 << cur_order) > first_dirty_pg) )
+{
+if ( cur_head < first_dirty_pg )
+i = (first_dirty_pg - cur_head) / sizeof(*cur_head);


I assume the same comment as above applies here.


+else
+i = 0;
+
+for ( ; i < (1 << cur_order); i++ )
+                if ( test_bit(_PGC_need_scrub, &cur_head[i].count_info) )
+{
+first_dirty = i;
+break;
+}


Perhaps worth having ASSERT(first_dirty != INVALID_DIRTY_IDX) here? Or are
there cases where ->u.free.first_dirty of a page may be wrong?



When we merge in free_heap_pages we don't clear first_dirty of the 
successor buddy (at some point I did have this done but you questioned 
whether it was needed and I dropped it).






@@ -977,35 +1090,53 @@ static void free_heap_pages(
  
  if ( (page_to_mfn(pg) & mask) )

  {
+struct page_info *predecessor = pg - mask;
+
  /* Merge with predecessor block? */
-if ( !mfn_valid(_mfn(page_to_mfn(pg-mask))) ||
- !page_state_is(pg-mask, free) ||
- (PFN_ORDER(pg-mask) != order) ||
- (phys_to_nid(page_to_maddr(pg-mask)) != node) )
+if ( !mfn_valid(_mfn(page_to_mfn(predecessor))) ||
+ !page_state_is(predecessor, free) ||
+ (PFN_ORDER(predecessor) != order) ||
+ (phys_to_nid(page_to_maddr(predecessor)) != node) )
  break;
-        pg -= mask;
-        page_list_del(pg, &heap(node, zone, order));
+
+        page_list_del(predecessor, &heap(node, zone, order));
+
+if ( predecessor->u.free.first_dirty != INVALID_DIRTY_IDX )
+need_scrub = true;


I'm afraid I continue to be confused by this: Why does need_scrub depend on
the state of pages not being the subject of the current free operation? I
realize that at this point in the series the path can't be taken yet, but
won't later patches need to rip it out or change it anyway, in which case it
would be better to introduce the then correct check (if any) only there?



Right, at this point we indeed will never have the 'if' evaluate to true 
since heap is always clean. And when 

[Xen-devel] [linux-4.9 test] 112117: regressions - FAIL

2017-07-22 Thread osstest service owner
flight 112117 linux-4.9 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/112117/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail REGR. vs. 
111883

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail like 111843
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeatfail  like 111843
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail like 111883
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail like 111883
 test-amd64-amd64-xl-rtds 10 debian-install   fail  like 111883
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-installfail never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 10 windows-installfail never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 13 guest-saverestore   fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 13 guest-saverestore   fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 linux    c03917de04aa68017a737e90ea01338d991eaff5
baseline version:
 linux    f0cd77ded5127168b1b83ca2f366ee17e9c0586f

Last test of basis   111883  2017-07-16 11:10:00 Z    6 days
Testing same since   112086  2017-07-21 06:22:54 Z    1 days    2 attempts


People who touched revisions under test:
  "Eric W. Biederman" 
  Adam Borowski 
  Alban Browaeys 
  Alexei 

[Xen-devel] [linux-linus test] 112114: regressions - trouble: broken/fail/pass

2017-07-22 Thread osstest service owner
flight 112114 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/112114/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-boot fail REGR. 
vs. 110515
 test-amd64-amd64-i386-pvgrub  7 xen-boot fail REGR. vs. 110515
 test-amd64-amd64-xl-pvh-intel  7 xen-bootfail REGR. vs. 110515
 test-amd64-amd64-qemuu-nested-intel  7 xen-boot  fail REGR. vs. 110515
 test-amd64-amd64-xl-qcow2 7 xen-boot fail REGR. vs. 110515
 test-amd64-amd64-amd64-pvgrub  7 xen-bootfail REGR. vs. 110515
 test-amd64-amd64-xl-qemut-debianhvm-amd64  7 xen-bootfail REGR. vs. 110515
 test-amd64-amd64-pygrub   7 xen-boot fail REGR. vs. 110515
 test-amd64-amd64-libvirt-pair 21 guest-start/debian  fail REGR. vs. 110515
 test-amd64-amd64-pair21 guest-start/debian   fail REGR. vs. 110515
 test-amd64-i386-libvirt-pair 21 guest-start/debian   fail REGR. vs. 110515
 test-amd64-i386-pair 21 guest-start/debian   fail REGR. vs. 110515
 test-amd64-i386-xl-qemut-debianhvm-amd64 15 guest-saverestore.2 fail REGR. vs. 
110515

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   4 host-install(4)   broken blocked in 110515
 test-amd64-i386-xl-qemut-win7-amd64 18 guest-start/win.repeat fail blocked in 
110515
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 110515
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 110515
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 110515
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 110515
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 110515
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 110515
 test-amd64-amd64-xl-rtds 10 debian-install   fail  like 110515
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeatfail  like 110515
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-installfail never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 13 guest-saverestore   fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 13 guest-saverestore   fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install  

[Xen-devel] VPMU interrupt unreliability

2017-07-22 Thread Kyle Huey
Last year I reported[0] seeing occasional instability in performance
counter values when running rr[1], which depends on completely
deterministic counts of retired conditional branches of userspace
programs.

I recently identified the cause of this problem.  Xen's VPMU code
contains a workaround for an alleged Nehalem bug that was added in
2010[2].  Supposedly, if a hardware performance counter reaches 0
exactly during a PMI, another PMI is generated, potentially causing an
endless loop.  The workaround is to set the counter to 1.  In 2013 the
original bug was believed to affect more than just Nehalem and the
workaround was enabled for all family 6 CPUs.[3]  This workaround
unfortunately disturbs the counter value in non-deterministic ways
(since the value the counter has in the irq handler depends on
interrupt latency), which is fatal to rr.

I've verified that the discrepancies we see in the counted values are
entirely accounted for by the number of times the workaround is used
in any given run.  Furthermore, patching Xen not to use this
workaround makes the discrepancies in the counts vanish.  I've added
code[4] to rr that reliably detects this problem from guest userspace.
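
For a sense of what such a userspace check looks like, here is a rough sketch using Linux's perf_event_open(). It is not rr's code and only shows the measurement scaffolding: it counts the generic branch-instructions event around a fixed workload, whereas rr programs the model-specific retired-conditional-branches raw event and arms an overflow period so that the PMI actually fires, which is the condition under which the Xen workaround perturbs the count.

#define _GNU_SOURCE
#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr;
    volatile unsigned int sink = 0;
    uint64_t count = 0;
    int fd;

    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_HARDWARE;
    attr.config = PERF_COUNT_HW_BRANCH_INSTRUCTIONS;  /* stand-in event */
    attr.disabled = 1;
    attr.exclude_kernel = 1;
    attr.exclude_hv = 1;

    fd = perf_event_open(&attr, 0, -1, -1, 0);
    if ( fd < 0 )
    {
        perror("perf_event_open");
        return 1;
    }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    for ( unsigned int i = 0; i < 500; i++ )   /* fixed, branchy workload */
        sink += i;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    if ( read(fd, &count, sizeof(count)) != sizeof(count) )
        return 1;

    /* Compare runs of this against each other (or against bare metal);
     * a counter disturbed by the hypervisor shows run-to-run drift that
     * the deterministic workload cannot explain. */
    printf("branch instructions counted: %llu\n",
           (unsigned long long)count);
    return 0;
}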

Even with the workaround removed in Xen I see some additional issues
(but not disturbed counter values) with the PMI, such as interrupts
occasionally not being delivered to the guest.  I haven't done much
work to track these down, but my working theory is that interrupts
that "skid" out of the guest that requested them and into Xen itself
or perhaps even another guest are not being delivered.

Our current plan is to stop depending on the PMI during rr's recording
phase (which we use for timeslicing tracees primarily because it's
convenient) to enable producing correct recordings in Xen guests.
Accurate replay will not be possible under virtualization because of
the PMI issues; that will require transferring the recording to
another machine.  But that will be sufficient to enable the use cases
we care about (e.g. record an automated process on a cloud computing
provider and have an engineer download and replay a failing recording
later to debug it).

I can think of several possible ways to fix the overcount problem, including:
1. Restricting the workaround to apply only to older CPUs and not all
family 6 Intel CPUs forever.
2. Intercepting MSR loads for counters that have the workaround
applied and giving the guest the correct counter value.
3. Or perhaps even changing the workaround to disable the PMI on that
counter until the guest acks via GLOBAL_OVF_CTRL, assuming that works
on the relevant hardware.

Since I don't have the relevant hardware to test changes to this
workaround on and rr can avoid these bugs through other means I don't
expect to work on this myself, but I wanted to apprise you of what
we've learned.

- Kyle

[0] https://lists.xen.org/archives/html/xen-devel/2016-10/msg01288.html
[1] http://rr-project.org/
[2] 
https://xenbits.xen.org/gitweb/?p=xen.git;a=blobdiff;f=xen/arch/x86/hvm/vmx/vpmu_core2.c;h=44aa8e3c47fc02e401f5c382d89b97eef0cd2019;hp=ce4fd2d43e04db5e9b042344dd294cfa11e1f405;hb=3ed6a063d2a5f6197306b030e8c27c36d5f31aa1;hpb=566f83823996cf9c95f9a0562488f6b1215a1052
[3] 
https://xenbits.xen.org/gitweb/?p=xen.git;a=blobdiff;f=xen/arch/x86/hvm/vmx/vpmu_core2.c;h=15b2036c8db1e56d8865ee34c363e7f23aa75e33;hp=9f152b48c26dfeedb6f94189a5fe4a5f7a772d83;hb=75a92f551ade530ebab73a0c3d4934dfb28149b5;hpb=71fc4da1306cec55a42787310b01a1cb52489abc
[4] See 
https://github.com/mozilla/rr/blob/a5d23728cd7d01c6be0c79852af26c68160d4405/src/PerfCounters.cc#L313,
which sets up a counter and then does some pointless math in a loop to
reach exactly 500 conditional branches.  Xen will report 501 branches
because of this bug.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [linux-3.18 baseline-only test] 71731: trouble: blocked/broken

2017-07-22 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 71731 linux-3.18 real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/71731/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-pvops 4 host-install(4) broken REGR. vs. 71696
 build-armhf   4 host-install(4) broken REGR. vs. 71696
 build-armhf-xsm   4 host-install(4) broken REGR. vs. 71696
 build-amd64-pvops 4 host-install(4) broken REGR. vs. 71696
 build-amd64   4 host-install(4) broken REGR. vs. 71696
 build-i3864 host-install(4) broken REGR. vs. 71696
 build-amd64-xsm   4 host-install(4) broken REGR. vs. 71696
 build-i386-pvops  4 host-install(4) broken REGR. vs. 71696
 build-i386-xsm4 host-install(4) broken REGR. vs. 71696

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-xl-midway1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm  1 build-check(1) blocked n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-win10-i386  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked 
n/a
 test-amd64-i386-xl-qemuu-win10-i386  1 build-check(1)  blocked n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-arm64-arm64-examine  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-examine   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-pvh-amd   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-win10-i386  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 build-i386-rumprun1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win10-i386  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt-xsm  1 build-check(1)   blocked  n/a
 build-amd64-rumprun   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-xsm1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1)blocked n/a
 test-amd64-amd64-rumprun-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvh-intel  1 build-check(1)   blocked  n/a
 

[Xen-devel] [qemu-mainline test] 112100: regressions - FAIL

2017-07-22 Thread osstest service owner
flight 112100 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/112100/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-xsm6 xen-buildfail REGR. vs. 111765
 build-i3866 xen-buildfail REGR. vs. 111765
 build-armhf-xsm   6 xen-buildfail REGR. vs. 111765
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 111765
 build-armhf   6 xen-buildfail REGR. vs. 111765

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-xsm1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-raw1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1) blocked n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-win10-i386  1 build-check(1)  blocked n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-armhf-armhf-xl-rtds  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-rtds 10 debian-install   fail  like 111765
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-installfail never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 qemuu    91939262ffcd3c85ea6a4793d3029326eea1d649
baseline version:
 qemuu    31fe1c414501047cbb91b695bdccc0068496dcf6

Last test of basis   111765  2017-07-13 10:20:16 Z    9 days
Failing since        111790  2017-07-14 04:20:46 Z    8 days   11 attempts
Testing same since   112100  2017-07-21 16:42:38 Z    1 days    1 attempts


People who touched revisions under test:
  Alex Bennée 
  Alex Williamson 
  Alexander Graf 
  Alexey Kardashevskiy 
  Alistair Francis 

[Xen-devel] [qemu-mainline bisection] complete build-i386-xsm

2017-07-22 Thread osstest service owner
branch xen-unstable
xenbranch xen-unstable
job build-i386-xsm
testid xen-build

Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  5ba3d7564593c55292056ef5af84d50b55ebcf0e
  Bug not present: 759235653de427e4e7b62d8e6fb1ef9cb68bac7d
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/112152/


  commit 5ba3d7564593c55292056ef5af84d50b55ebcf0e
  Author: Igor Druzhinin 
  Date:   Mon Jul 10 23:40:02 2017 +0100
  
  xen/mapcache: introduce xen_replace_cache_entry()
  
  This new call is trying to update a requested map cache entry
  according to the changes in the physmap. The call is searching
  for the entry, unmaps it and maps again at the same place using
  a new guest address. If the mapping is dummy this call will
  make it real.
  
  This function makes use of a new xenforeignmemory_map2() call
  with an extended interface that was recently introduced in
  libxenforeignmemory [1].
  
  [1] https://www.mail-archive.com/xen-devel@lists.xen.org/msg113007.html
  
  Signed-off-by: Igor Druzhinin 
  Reviewed-by: Paul Durrant 
  Reviewed-by: Stefano Stabellini 
  Signed-off-by: Stefano Stabellini 


For bisection revision-tuple graph see:
   
http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/build-i386-xsm.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.


Running cs-bisection-step 
--graph-out=/home/logs/results/bisect/qemu-mainline/build-i386-xsm.xen-build 
--summary-out=tmp/112152.bisection-summary --basis-template=111765 
--blessings=real,real-bisect qemu-mainline build-i386-xsm xen-build
Searching for failure / basis pass:
 112100 fail [host=nobling0] / 112011 [host=huxelrebe0] 111986 [host=nocera0] 
111963 [host=italia1] 111926 [host=huxelrebe0] 111889 ok.
Failure / basis pass flights: 112100 / 111889
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 8051789e982499050680a26febeada7467e18a8d 
91939262ffcd3c85ea6a4793d3029326eea1d649 
d535d8922f571502252deaf607e82e7475cd1728
Basis pass 8051789e982499050680a26febeada7467e18a8d 
4871b51b9241b10f4fd8e04bbb21577886795e25 
614a14736e33fb84872eb00f08799ebbc73a96c6
Generating revisions with ./adhoc-revtuple-generator  
git://xenbits.xen.org/qemu-xen-traditional.git#8051789e982499050680a26febeada7467e18a8d-8051789e982499050680a26febeada7467e18a8d
 
git://git.qemu.org/qemu.git#4871b51b9241b10f4fd8e04bbb21577886795e25-91939262ffcd3c85ea6a4793d3029326eea1d649
 
git://xenbits.xen.org/xen.git#614a14736e33fb84872eb00f08799ebbc73a96c6-d535d8922f571502252deaf607e82e7475cd1728
Loaded 7998 nodes in revision graph
Searching for test results:
 111817 [host=italia0]
 111848 [host=italia1]
 111889 pass 8051789e982499050680a26febeada7467e18a8d 
4871b51b9241b10f4fd8e04bbb21577886795e25 
614a14736e33fb84872eb00f08799ebbc73a96c6
 111926 [host=huxelrebe0]
 111986 [host=nocera0]
 111963 [host=italia1]
 112011 [host=huxelrebe0]
 112041 fail 8051789e982499050680a26febeada7467e18a8d 
d4e59218ab80e86015753782fb5378767a51ccd0 
d535d8922f571502252deaf607e82e7475cd1728
 112072 fail 8051789e982499050680a26febeada7467e18a8d 
25d0233c1ac6cd14a15fcc834f1de3b179037b1d 
d535d8922f571502252deaf607e82e7475cd1728
 112100 fail 8051789e982499050680a26febeada7467e18a8d 
91939262ffcd3c85ea6a4793d3029326eea1d649 
d535d8922f571502252deaf607e82e7475cd1728
 112134 pass 8051789e982499050680a26febeada7467e18a8d 
f9dada2baabb639feb988b3a564df7a06d214e18 
b9cd216f74411a699c3e5ce3d25a375af37f096c
 112135 pass 8051789e982499050680a26febeada7467e18a8d 
6632f6ff96f0537fc34cdc00c760656fc62e23c5 
2b8a8a03f56e21381c7dd560b081002d357639e2
 112137 pass 8051789e982499050680a26febeada7467e18a8d 
1f244ebbba650b82828b6139377d6199fe648d6b 
2b8a8a03f56e21381c7dd560b081002d357639e2
 112138 pass 8051789e982499050680a26febeada7467e18a8d 
22d716c28e95e4640e2cd80553eb3f662db3fd50 
d535d8922f571502252deaf607e82e7475cd1728
 112139 pass 8051789e982499050680a26febeada7467e18a8d 
d962c6266c5361f62f16b3c7b1c5b587502eaf77 
d535d8922f571502252deaf607e82e7475cd1728
 112141 fail 8051789e982499050680a26febeada7467e18a8d 
331b5189d756d431b1d18ae7097527ba3d3ea809 
b9cd216f74411a699c3e5ce3d25a375af37f096c
 112142 pass 8051789e982499050680a26febeada7467e18a8d 
04d6da4ff6084a3cb1b7a981769d9aa17e469348 
b9cd216f74411a699c3e5ce3d25a375af37f096c
 112128 pass 8051789e982499050680a26febeada7467e18a8d 
4871b51b9241b10f4fd8e04bbb21577886795e25 

[Xen-devel] [linux-3.18 test] 112102: tolerable FAIL - PUSHED

2017-07-22 Thread osstest service owner
flight 112102 linux-3.18 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/112102/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-arndale  4 host-install(4) broken in 112085 pass in 112102
 test-armhf-armhf-libvirt-raw  7 xen-boot fail in 112085 pass in 112102
 test-amd64-i386-xl-qemuu-debianhvm-amd64 16 guest-localmigrate/x10 fail in 
112085 pass in 112102
 test-amd64-amd64-rumprun-amd64 17 rumprun-demo-xenstorels/xenstorels.repeat 
fail pass in 112085
 test-amd64-i386-qemut-rhel6hvm-intel 12 guest-start/redhat.repeat fail pass in 
112085

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 test-arm64-arm64-examine  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop  fail in 112085 like 111893
 test-amd64-i386-freebsd10-amd64 19 guest-start/freebsd.repeat fail in 112085 
like 111920
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop   fail in 112085 like 111920
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail like 111867
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail like 111893
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail like 111893
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 111920
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 111920
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 111920
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail like 111920
 test-amd64-amd64-xl-rtds 10 debian-install   fail  like 111920
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-installfail never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 13 guest-saverestore   fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 13 guest-saverestore   fail never pass
 build-arm64-pvops 6 kernel-build fail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 10 windows-installfail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 linux    dd8b674caeef9381345a6369fba29d425ff433f3
baseline version:
 linux

[Xen-devel] [xen-unstable test] 112098: regressions - FAIL

2017-07-22 Thread osstest service owner
flight 112098 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/112098/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail REGR. vs. 
112004

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop   fail blocked in 112004
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 112004
 test-amd64-amd64-xl-qemuu-win7-amd64 18 guest-start/win.repeat fail like 112004
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 112004
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 112004
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 112004
 test-amd64-amd64-xl-rtds 10 debian-install   fail  like 112004
 test-amd64-amd64-xl-qemut-ws16-amd64 10 windows-installfail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-installfail never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 13 guest-saverestore   fail never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 13 guest-saverestore   fail never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass

version targeted for testing:
 xen  73771b89fd9d89a23d5c7b760056fdaf94946be9
baseline version:
 xen  d535d8922f571502252deaf607e82e7475cd1728

Last test of basis   112004  2017-07-19 06:51:03 Z3 days
Failing since        112033  2017-07-20 02:24:27 Z2 days3 attempts
Testing same since   112098  2017-07-21 14:48:20 Z0 days1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Felix Schmoll 
  Ian Jackson 
  Owen 

[Xen-devel] [linux-linus bisection] complete test-amd64-amd64-libvirt

2017-07-22 Thread osstest service owner
branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-libvirt
testid guest-saverestore.2

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_gnulib git://git.sv.gnu.org/gnulib.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux 
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
  Bug introduced:  921edf312a6a20be16cf2b60e0dec3dce35e5cb9
  Bug not present: 32c1431eea4881a6b17bd7c639315010aeefa452
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/112133/


  (Revision log too long, omitted.)


For bisection revision-tuple graph see:
   
http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-linus/test-amd64-amd64-libvirt.guest-saverestore.2.html
Revision IDs in each graph node refer, respectively, to the Trees above.


Running cs-bisection-step 
--graph-out=/home/logs/results/bisect/linux-linus/test-amd64-amd64-libvirt.guest-saverestore.2
 --summary-out=tmp/112133.bisection-summary --basis-template=110515 
--blessings=real,real-bisect linux-linus test-amd64-amd64-libvirt 
guest-saverestore.2
Searching for failure / basis pass:
 112083 fail [host=pinot1] / 111363 [host=nocera1] 111332 [host=chardonnay0] 
111280 [host=pinot0] 111222 [host=rimava1] 83 [host=godello0] 48 
[host=huxelrebe0] 24 [host=merlot0] 111081 [host=rimava0] 110984 
[host=godello1] 110950 [host=baroque1] 110908 [host=huxelrebe1] 110560 
[host=nobling1] 110547 [host=elbling1] 110536 [host=elbling0] 110515 
[host=chardonnay1] 110486 [host=nocera1] 110464 [host=rimava1] 110427 
[host=godello0] 110399 [host=baroque0] 110380 [host=nobling0] 110346 ok.
Failure / basis pass flights: 112083 / 110346
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_gnulib git://git.sv.gnu.org/gnulib.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 9af764e86aef7dfb0191a9561bf1d1abf941da05 
ce4ee4cbb596a9d7de2786cf8c48cf62a4edede7 
7bf5710b22aa8d58b7eeaaf3dc6960c26cade4f0 
921edf312a6a20be16cf2b60e0dec3dce35e5cb9 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
8051789e982499050680a26febeada7467e18a8d 
414d069b38ab114b89085e44989bf57604ea86d7 
d535d8922f571502252deaf607e82e7475cd1728
Basis pass 3596b1ddf912418f70c9eaa07d460aacf574bbfd 
da830b5146cb553ac2a4bcfe76caeb57bda24cc3 
7bf5710b22aa8d58b7eeaaf3dc6960c26cade4f0 
32c1431eea4881a6b17bd7c639315010aeefa452 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
8051789e982499050680a26febeada7467e18a8d 
e97832ec6b2a7ddd48b8e6d1d848ffdfee6a31c7 
aeef64107afca9c6c0428b2cb26a3ba599b3ed75
Generating revisions with ./adhoc-revtuple-generator  
git://xenbits.xen.org/libvirt.git#3596b1ddf912418f70c9eaa07d460aacf574bbfd-9af764e86aef7dfb0191a9561bf1d1abf941da05
 
git://git.sv.gnu.org/gnulib.git#da830b5146cb553ac2a4bcfe76caeb57bda24cc3-ce4ee4cbb596a9d7de2786cf8c48cf62a4edede7
 
https://gitlab.com/keycodemap/keycodemapdb.git#7bf5710b22aa8d58b7eeaaf3dc6960c26cade4f0-7bf5710b22aa8d58b7eeaaf3dc6960c26cade4f0
 
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git#32c1431eea4881a6b17bd7c639315010aeefa452-921edf312a6a20be16cf2b60e0dec3dce35e5cb9
 
git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860
 
git://xenbits.xen.org/qemu-xen-traditional.git#8051789e982499050680a26febeada7467e18a8d-8051789e982499050680a26febeada7467e18a8d
 
git://xenbits.xen.org/qemu-xen.git#e97832ec6b2a7ddd48b8e6d1d848ffdfee6a31c7-414d069b38ab114b89085e44989bf57604ea86d7
 
git://xenbits.xen.org/xen.git#aeef64107afca9c6c0428b2cb26a3ba599b3ed75-d535d8922f571502252deaf607e82e7475cd1728
adhoc-revtuple-generator: tree discontiguous: linux-2.6
Loaded 4007 nodes in revision graph
Searching for test results:
 110236 [host=huxelrebe0]
 110346 pass 3596b1ddf912418f70c9eaa07d460aacf574bbfd 
da830b5146cb553ac2a4bcfe76caeb57bda24cc3 
7bf5710b22aa8d58b7eeaaf3dc6960c26cade4f0 
32c1431eea4881a6b17bd7c639315010aeefa452 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
8051789e982499050680a26febeada7467e18a8d 
e97832ec6b2a7ddd48b8e6d1d848ffdfee6a31c7 
aeef64107afca9c6c0428b2cb26a3ba599b3ed75
 110288 [host=italia1]
 110380 [host=nobling0]
 110399 [host=baroque0]
 110427 [host=godello0]
 

Re: [Xen-devel] [GIT PULL] xen: features and fixes for 4.13-rc2

2017-07-22 Thread Juergen Gross
On 21/07/17 22:57, Linus Torvalds wrote:
> On Fri, Jul 21, 2017 at 3:17 AM, Juergen Gross  wrote:
>>  drivers/xen/pvcalls-back.c | 1236 
>> 
> 
> This really doesn't look like a fix.
> 
> The merge window is over.
> 
> So I'm not pulling this without way more explanations of why I should.

Hmm, okay. I estimated the risk of adding this new driver to be rather
low, as it won't be used other than in development systems right now.

In case you don't want to pull it I'm fine with sending you another
pull request without this driver.


Juergen



[Xen-devel] [qemu-mainline bisection] complete test-amd64-amd64-xl-qemuu-win7-amd64

2017-07-22 Thread osstest service owner
branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemuu-win7-amd64
testid windows-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  04bf2526ce87f21b32c9acba1c5518708c243ad0
  Bug not present: 1a29cc8f5ebd657e159dbe4be340102595846d42
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/112125/


  commit 04bf2526ce87f21b32c9acba1c5518708c243ad0
  Author: Prasad J Pandit 
  Date:   Wed Jul 12 18:08:40 2017 +0530
  
  exec: use qemu_ram_ptr_length to access guest ram
  
  When accessing a guest's RAM block during a DMA operation, use
  'qemu_ram_ptr_length' to get the RAM block pointer. It ensures
  that a DMA operation of the given length is possible and avoids
  any OOB memory access situations.
  
  Reported-by: Alex 
  Signed-off-by: Prasad J Pandit 
  Message-Id: <20170712123840.29328-1-ppan...@redhat.com>
  Signed-off-by: Paolo Bonzini 
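
The fix above replaces an unchecked guest-RAM pointer lookup with one that
validates the requested DMA length before handing back a host pointer. A
minimal sketch of that pattern, using an invented RamRegion structure and
region_ptr_length() helper rather than QEMU's actual RAMBlock /
qemu_ram_ptr_length API:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-in for a guest RAM block (not QEMU's RAMBlock). */
typedef struct RamRegion {
    uint8_t  *host;   /* host mapping of the guest RAM region */
    uint64_t  size;   /* length of the region in bytes */
} RamRegion;

/*
 * Length-validating lookup: instead of blindly returning host + addr,
 * refuse out-of-range offsets and clamp *len so a DMA of that length
 * cannot run past the end of the region.
 */
static uint8_t *region_ptr_length(RamRegion *r, uint64_t addr, uint64_t *len)
{
    if (addr >= r->size) {
        *len = 0;
        return NULL;               /* offset itself is out of range */
    }
    if (*len > r->size - addr) {
        *len = r->size - addr;     /* clamp the DMA length to what fits */
    }
    return r->host + addr;
}

int main(void)
{
    RamRegion r = { .host = calloc(1, 4096), .size = 4096 };
    uint64_t len = 8192;           /* deliberately longer than the region */

    uint8_t *p = region_ptr_length(&r, 100, &len);
    printf("ptr=%p usable_len=%llu\n", (void *)p, (unsigned long long)len);

    free(r.host);
    return 0;
}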


For bisection revision-tuple graph see:
   
http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-win7-amd64.windows-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.


Running cs-bisection-step 
--graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-win7-amd64.windows-install
 --summary-out=tmp/112125.bisection-summary --basis-template=111765 
--blessings=real,real-bisect qemu-mainline test-amd64-amd64-xl-qemuu-win7-amd64 
windows-install
Searching for failure / basis pass:
 112072 fail [host=nocera0] / 111790 ok.
Failure / basis pass flights: 112072 / 111790
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: xen git://xenbits.xen.org/xen.git
Latest b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
8051789e982499050680a26febeada7467e18a8d 
25d0233c1ac6cd14a15fcc834f1de3b179037b1d 
d535d8922f571502252deaf607e82e7475cd1728
Basis pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
8051789e982499050680a26febeada7467e18a8d 
49bcce4b9c11759678fd223aefb48691c4959d4f 
614a14736e33fb84872eb00f08799ebbc73a96c6
Generating revisions with ./adhoc-revtuple-generator  
git://xenbits.xen.org/linux-pvops.git#b65f2f457c49b2cfd7967c34b7a0b04c25587f13-b65f2f457c49b2cfd7967c34b7a0b04c25587f13
 
git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860
 
git://xenbits.xen.org/qemu-xen-traditional.git#8051789e982499050680a26febeada7467e18a8d-8051789e982499050680a26febeada7467e18a8d
 
git://git.qemu.org/qemu.git#49bcce4b9c11759678fd223aefb48691c4959d4f-25d0233c1ac6cd14a15fcc834f1de3b179037b1d
 
git://xenbits.xen.org/xen.git#614a14736e33fb84872eb00f08799ebbc73a96c6-d535d8922f571502252deaf607e82e7475cd1728
Loaded 9991 nodes in revision graph
Searching for test results:
 111815 pass irrelevant
 111790 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
8051789e982499050680a26febeada7467e18a8d 
49bcce4b9c11759678fd223aefb48691c4959d4f 
614a14736e33fb84872eb00f08799ebbc73a96c6
 111821 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
8051789e982499050680a26febeada7467e18a8d 
49bcce4b9c11759678fd223aefb48691c4959d4f 
614a14736e33fb84872eb00f08799ebbc73a96c6
 111817 fail irrelevant
 111848 fail irrelevant
 111889 fail irrelevant
 111926 fail irrelevant
 111986 fail irrelevant
 111963 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
8051789e982499050680a26febeada7467e18a8d 
ca4e667dbf431d4a2a5a619cde79d30dd2ac3eb2 
2b8a8a03f56e21381c7dd560b081002d357639e2
 112106 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
8051789e982499050680a26febeada7467e18a8d 
25d0233c1ac6cd14a15fcc834f1de3b179037b1d 
d535d8922f571502252deaf607e82e7475cd1728
 112042 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
8051789e982499050680a26febeada7467e18a8d 
49bcce4b9c11759678fd223aefb48691c4959d4f 
614a14736e33fb84872eb00f08799ebbc73a96c6
 112054 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
8051789e982499050680a26febeada7467e18a8d 
acbaa0f4fd0491d222b718688244e629aa188b3c 

[Xen-devel] [distros-debian-stretch test] 71730: tolerable trouble: blocked/broken/fail/pass

2017-07-22 Thread Platform Team regression test user
flight 71730 distros-debian-stretch real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/71730/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-armhf-stretch-netboot-pygrub  1 build-check(1)blocked n/a
 build-arm64-pvops 2 hosts-allocate   broken like 71693
 build-arm64   2 hosts-allocate   broken like 71693
 build-arm64-pvops 3 capture-logs broken like 71693
 build-arm64   3 capture-logs broken like 71693
 test-amd64-amd64-amd64-stretch-netboot-pvgrub 10 debian-di-install fail like 
71693
 test-amd64-i386-amd64-stretch-netboot-pygrub 10 debian-di-install fail like 
71693
 test-amd64-amd64-i386-stretch-netboot-pygrub 10 debian-di-install fail like 
71693
 test-amd64-i386-i386-stretch-netboot-pvgrub 10 debian-di-install fail like 
71693
 test-armhf-armhf-armhf-stretch-netboot-pygrub 10 debian-di-install fail like 
71693

baseline version:
 flight   71693

jobs:
 build-amd64  pass
 build-arm64  broken  
 build-armhf  pass
 build-i386   pass
 build-amd64-pvopspass
 build-arm64-pvopsbroken  
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-amd64-stretch-netboot-pvgrubfail
 test-amd64-i386-i386-stretch-netboot-pvgrub  fail
 test-amd64-i386-amd64-stretch-netboot-pygrub fail
 test-arm64-arm64-armhf-stretch-netboot-pygrubblocked 
 test-armhf-armhf-armhf-stretch-netboot-pygrubfail
 test-amd64-amd64-i386-stretch-netboot-pygrub fail



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.




[Xen-devel] [linux-next test] 112090: regressions - FAIL

2017-07-22 Thread osstest service owner
flight 112090 linux-next real [real]
http://logs.test-lab.xenproject.org/osstest/logs/112090/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-boot  fail REGR. vs. 112049
 test-amd64-i386-xl-qemuu-win10-i386  7 xen-boot  fail REGR. vs. 112049
 test-amd64-i386-libvirt   7 xen-boot fail REGR. vs. 112049
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-boot fail REGR. vs. 112049
 test-amd64-i386-xl-xsm7 xen-boot fail REGR. vs. 112049
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-boot fail REGR. vs. 112049
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-boot fail REGR. vs. 112049
 test-amd64-i386-pair 10 xen-boot/src_hostfail REGR. vs. 112049
 test-amd64-i386-pair 11 xen-boot/dst_hostfail REGR. vs. 112049
 test-amd64-i386-xl-qemut-win10-i386  7 xen-boot  fail REGR. vs. 112049
 test-amd64-i386-xl7 xen-boot fail REGR. vs. 112049
 test-amd64-i386-freebsd10-i386  7 xen-boot   fail REGR. vs. 112049
 test-amd64-i386-libvirt-pair 10 xen-boot/src_hostfail REGR. vs. 112049
 test-amd64-i386-libvirt-pair 11 xen-boot/dst_hostfail REGR. vs. 112049
 test-amd64-i386-examine   7 reboot   fail REGR. vs. 112049
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-boot fail REGR. vs. 112049
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-boot  fail REGR. vs. 112049
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-boot   fail REGR. vs. 112049
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 
112049
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-boot  fail REGR. vs. 112049
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  7 xen-boot fail REGR. vs. 112049
 test-amd64-i386-freebsd10-amd64  7 xen-boot  fail REGR. vs. 112049
 test-amd64-amd64-rumprun-amd64  7 xen-boot   fail REGR. vs. 112049
 test-amd64-amd64-pair10 xen-boot/src_hostfail REGR. vs. 112049
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm  7 xen-boot fail REGR. vs. 112049
 test-amd64-amd64-pair11 xen-boot/dst_hostfail REGR. vs. 112049
 test-amd64-amd64-examine  7 reboot   fail REGR. vs. 112049
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-boot  fail REGR. vs. 112049
 test-amd64-i386-xl-raw7 xen-boot fail REGR. vs. 112049
 test-amd64-i386-rumprun-i386  7 xen-boot fail REGR. vs. 112049
 test-amd64-i386-libvirt-xsm   7 xen-boot fail REGR. vs. 112049
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-boot   fail REGR. vs. 112049
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 
112049
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail REGR. vs. 
112049
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-boot  fail REGR. vs. 112049

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds 12 guest-start  fail REGR. vs. 112049

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 18 guest-start/win.repeat fail blocked in 
112049
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 112049
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 112049
 test-amd64-amd64-libvirt-pair 21 guest-start/debian   fail like 112049
 test-amd64-amd64-amd64-pvgrub  7 xen-boot fail like 112049
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 112049
 test-amd64-amd64-xl-rtds 10 debian-install   fail  like 112049
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 10 windows-installfail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 

[Xen-devel] [ovmf baseline-only test] 71729: all pass

2017-07-22 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 71729 ovmf real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/71729/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf 1683ecec41a7c944783c51efa75375f1e0a71d08
baseline version:
 ovmf 79aac4dd756bb2809cdcb74f7d2ae8a630457c99

Last test of basis71705  2017-07-20 08:18:01 Z1 days
Testing same since71729  2017-07-21 16:59:51 Z0 days1 attempts


People who touched revisions under test:
  Star Zeng 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.


commit 1683ecec41a7c944783c51efa75375f1e0a71d08
Author: Star Zeng 
Date:   Wed Jul 19 18:16:31 2017 +0800

MdePkg UsbFunctionIo.h: Update comments for GetDeviceInfo return status

UEFI spec 2.6 errata B update Status Codes Returned table of the
EFI_USBFN_IO_PROTOCOL.GetDeviceInfo function as follows:

1. Update EFI_INVALID_PARAMETER description:
Original text:
A parameter is invalid.
New text:
One or more of the following conditions is TRUE:
BufferSize is NULL.
*BufferSize is not 0 and Buffer is NULL.
Id is invalid.

2. Update EFI_BUFFER_TOO_SMALL description:
Original text:
Supplied buffer isn’t large enough to hold the request string.
New text:
The buffer is too small to hold the buffer.
*BufferSize has been updated with the size needed to hold the
request string.

Cc: Liming Gao 
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Star Zeng 
Reviewed-by: Liming Gao 
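
The errata wording above follows the usual UEFI size-query convention for
GetDeviceInfo: an undersized buffer returns EFI_BUFFER_TOO_SMALL and the
required length comes back through *BufferSize. A caller-side sketch of that
two-call pattern, assuming the protocol and Id names from MdePkg's
UsbFunctionIo.h (EfiUsbDeviceInfoProductName is used purely as an example Id):

#include <Uefi.h>
#include <Library/MemoryAllocationLib.h>
#include <Protocol/UsbFunctionIo.h>

EFI_STATUS
GetUsbFnProductName (
  IN  EFI_USBFN_IO_PROTOCOL  *UsbFnIo,
  OUT CHAR16                 **Name
  )
{
  UINTN       Size;
  EFI_STATUS  Status;

  //
  // First call: BufferSize is non-NULL, *BufferSize is 0 and Buffer is NULL,
  // so per the errata this is a valid query that returns EFI_BUFFER_TOO_SMALL
  // and updates Size with the length needed for the request string.
  //
  Size   = 0;
  Status = UsbFnIo->GetDeviceInfo (UsbFnIo, EfiUsbDeviceInfoProductName,
                                   &Size, NULL);
  if (Status != EFI_BUFFER_TOO_SMALL) {
    return Status;
  }

  *Name = AllocateZeroPool (Size);
  if (*Name == NULL) {
    return EFI_OUT_OF_RESOURCES;
  }

  //
  // Second call with a buffer of the reported size.
  //
  Status = UsbFnIo->GetDeviceInfo (UsbFnIo, EfiUsbDeviceInfoProductName,
                                   &Size, *Name);
  if (EFI_ERROR (Status)) {
    FreePool (*Name);
    *Name = NULL;
  }

  return Status;
}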
