[Xen-devel] [xen-4.5-testing test] 63378: regressions - FAIL

2015-11-01 Thread osstest service owner
flight 63378 xen-4.5-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63378/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail REGR. vs. 63358
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail REGR. vs. 63358

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds  6 xen-boot fail   like 63358
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail like 63358

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-armhf-armhf-libvirt-qcow2  9 debian-di-install fail never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-armhf-armhf-xl-vhd   9 debian-di-install fail   never pass
 test-armhf-armhf-libvirt 14 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-raw  9 debian-di-install fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail  never pass
 test-armhf-armhf-xl  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail   never pass

version targeted for testing:
 xen  423d2cd814e8460d5ea8bd191a770f3c48b3947c
baseline version:
 xen  d3063bb2b118da2e84707f26a8d173c85a5d8f05

Last test of basis    63358  2015-10-29 13:11:46 Z    2 days
Testing same since    63378  2015-10-30 14:14:34 Z    1 days    1 attempts


People who touched revisions under test:
  Ian Jackson 

jobs:
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-prev pass
 build-i386-prev  pass
 build-amd64-pvops pass
 build-armhf-pvops pass
 build-i386-pvops pass
 build-amd64-rumpuserxen  pass
 build-i386-rumpuserxen   pass
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  pass
 test-amd64-i386-xl   pass
 test-amd64-amd64-xl-pvh-amd  fail
 test-amd64-i386-qemut-rhel6hvm-amd   pass
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64pass
 test-amd64-i386-xl-qemut-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-freebsd10-amd64  pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass
 test-amd64-amd64-rumpuserxen-amd64   pass
 test-amd64-amd64-xl-qemut-win7-amd64 fail
 test-amd64-i386-xl-qemut-win7-amd64  

[Xen-devel] [xen-4.3-testing test] 63381: regressions - FAIL

2015-11-01 Thread osstest service owner
flight 63381 xen-4.3-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63381/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-migrupgrade 21 guest-migrate/src_host/dst_host fail REGR. vs. 63212

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 63212

Tests which did not succeed, but are not blocking:
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  9 debian-hvm-install fail never pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  9 debian-hvm-install  fail never pass
 build-amd64-rumpuserxen   6 xen-build fail   never pass
 build-i386-rumpuserxen    6 xen-build fail   never pass
 test-armhf-armhf-xl-vhd   6 xen-boot fail   never pass
 test-armhf-armhf-xl-multivcpu  6 xen-boot fail  never pass
 test-armhf-armhf-xl   6 xen-boot fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale   6 xen-boot fail   never pass
 test-armhf-armhf-libvirt-qcow2  6 xen-boot fail never pass
 test-armhf-armhf-libvirt  6 xen-boot fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-cubietruck  6 xen-boot fail never pass
 test-armhf-armhf-xl-credit2   6 xen-boot fail   never pass
 test-armhf-armhf-libvirt-raw  6 xen-boot fail   never pass
 test-amd64-i386-migrupgrade 21 guest-migrate/src_host/dst_host fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 21 leak-check/check fail never pass

version targeted for testing:
 xen  e875e0e5fcc5912f71422b53674a97e5c0ae77be
baseline version:
 xen  85ca813ec23c5a60680e4a13777dad530065902b

Last test of basis    63212  2015-10-22 10:03:01 Z   10 days
Failing since         63360  2015-10-29 13:39:04 Z    2 days    2 attempts
Testing same since    63381  2015-10-30 18:44:54 Z    1 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Ian Campbell 
  Ian Jackson 
  Jan Beulich 

jobs:
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-prev pass
 build-i386-prev  pass
 build-amd64-pvops pass
 build-armhf-pvops pass
 build-i386-pvops pass
 build-amd64-rumpuserxen  fail
 build-i386-rumpuserxen   fail
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  fail
 test-amd64-i386-xl   pass
 test-amd64-i386-qemut-rhel6hvm-amd   pass
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64pass
 test-amd64-i386-xl-qemut-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-freebsd10-amd64  pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail
 test-amd64-amd64-rumpuserxen-amd64   blocked
 test-amd64-amd64-xl-qemut-win7-amd64 fail
 test-amd64-i386-xl-qemut-win7-amd64  fail
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i386-xl-qemuu-win7-amd64  fail
 

[Xen-devel] [linux-mingo-tip-master test] 63385: regressions - FAIL

2015-11-01 Thread osstest service owner
flight 63385 linux-mingo-tip-master real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63385/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops 5 kernel-build  fail REGR. vs. 60684
 build-i386-pvops  5 kernel-build  fail REGR. vs. 60684

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvh-intel  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 build-check(1)   blocked n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-xsm1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1) blocked n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm  1 build-check(1)blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1)blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvh-amd   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 build-check(1)   blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-raw1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)  blocked n/a

version targeted for testing:
 linux 76e261fef11c894919e3ceba2686596b9fa78050
baseline version:
 linux 69f75ebe3b1d1e636c4ce0a0ee248edacc69cbe0

Last test of basis    60684  2015-08-13 04:21:46 Z   80 days
Failing since 60712  2015-08-15 18:33:48 

[Xen-devel] [xen-4.6-testing test] 63379: trouble: broken/fail/pass

2015-11-01 Thread osstest service owner
flight 63379 xen-4.6-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63379/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-multivcpu  3 host-install(3) broken REGR. vs. 63359

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 15 guest-localmigrate.2 fail blocked in 63359

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-armhf-armhf-libvirt-raw  9 debian-di-install fail   never pass
 test-armhf-armhf-xl-vhd   9 debian-di-install fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-qcow2  9 debian-di-install fail never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeat fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl  13 saverestore-support-check fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt 14 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check fail   never pass

version targeted for testing:
 xen  40d7a7454835c2f7c639c78f6c09e7b6f0e4a4e2
baseline version:
 xen  bdc9fdf9d468cb94ca0fbed1b969c20bf173dc9b

Last test of basis    63359  2015-10-29 13:14:25 Z    2 days
Testing same since    63379  2015-10-30 17:09:57 Z    1 days    1 attempts


People who touched revisions under test:
  Ian Jackson 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-prev pass
 build-i386-prev  pass
 build-amd64-pvops pass
 build-armhf-pvops pass
 build-i386-pvops pass
 build-amd64-rumpuserxen  pass
 build-i386-rumpuserxen   pass
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  pass
 test-amd64-i386-xl   pass
 

[Xen-devel] [linux-3.10 test] 63391: regressions - FAIL

2015-11-01 Thread osstest service owner
flight 63391 linux-3.10 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63391/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-pvops 5 kernel-build  fail REGR. vs. 62642

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-xsm 18 guest-start/debian.repeat   fail pass in 63366

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 16 guest-localmigrate/x10 fail blocked in 62642
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 15 guest-localmigrate.2 fail in 63366 like 62642
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 62642
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail like 62642

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail   never pass

version targeted for testing:
 linux d17332ebfb5f2010ae5d3332a52df361f28ae4a8
baseline version:
 linux f5552cd830e58c46dffae3617b3ce0c839771981

Last test of basis    62642  2015-10-03 17:59:45 Z   29 days
Failing since         63224  2015-10-22 22:20:05 Z    9 days    8 attempts
Testing same since    63332  2015-10-27 12:23:40 Z    5 days    4 attempts


People who touched revisions under test:
  "Eric W. Biederman" 
  Aaron Conole 
  Adam Radford 
  Al Viro 
  Alexander Couzens 
  Alexey Klimov 
  Andi Kleen 
  Andreas Schwab 
  Andrew Morton 
  Ard Biesheuvel 
  Arnaldo Carvalho de Melo 
  Ben Hutchings 
  Charles Keepax 
  Christoph Biedl 
  Christoph Hellwig 
  cov...@ccs.covici.com 
  Daniel Vetter 
  Daniel Vetter 
  Dave Kleikamp 
  David S. Miller 
  David Vrabel 
  David Woodhouse 
  David Woodhouse 
  Ding Tianhong 
  dingtianhong 
  Dirk Mueller 
  Dirk Müller 
  Doug Ledford 
  Eric Dumazet 
  Eric W. Biederman 
  Geert Uytterhoeven 
  Greg Kroah-Hartman 
  Guenter Roeck 
  Guillaume Nault 
  H. Peter Anvin 
  Herbert Xu 
  Ian Abbott 
  Ilya Dryomov 
  Ingo Molnar 
  James Bottomley 
  James Chapman 
  James Hogan 
  Jan Kara 
  Jann Horn 
  Jarkko Nikula 
  Jeff Mahoney 
  Jiri Slaby 
  Joe Perches 
  Joe Stringer 
  Joe Thornber 
  Johan Hovold 
  John Covici 
  Julian Anastasov 
  Kees Cook 
  Linus Torvalds 
  Liu.Zhao 
  Mark Brown 
  Mark Salyzyn 
  Mathias Nyman 
  Mel Gorman 
  Michael Ellerman 
  Michal Hocko 
  Michel Stam 
  Mike Marciniszyn 
  Mike Snitzer 
  Mikulas Patocka 
  Namhyung Kim 
  NeilBrown 
  Nicolas Pitre 
  Nikolay 

[Xen-devel] [xen-4.4-testing test] 63382: tolerable FAIL - PUSHED

2015-11-01 Thread osstest service owner
flight 63382 xen-4.4-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63382/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-multivcpu 16 guest-start/debian.repeat fail  like 63097
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 63097

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 build-amd64-rumpuserxen   6 xen-build fail   never pass
 test-armhf-armhf-xl-vhd   9 debian-di-install fail   never pass
 test-armhf-armhf-libvirt-raw  9 debian-di-install fail   never pass
 build-i386-rumpuserxen    6 xen-build fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt 11 guest-start  fail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail  never pass
 test-armhf-armhf-xl  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-qcow2  9 debian-di-install fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 21 leak-check/check fail never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check fail   never pass

version targeted for testing:
 xen  73b70e3c5d59e63126c890068ee0cbf8a2a3b640
baseline version:
 xen  e321898a39222ad1feef352d65f71cef362b4a16

Last test of basis    63159  2015-10-21 16:01:57 Z   10 days
Failing since         63361  2015-10-29 13:39:05 Z    3 days    2 attempts
Testing same since    63382  2015-10-30 20:15:41 Z    1 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Ian Campbell 
  Ian Jackson 
  Jan Beulich 
  Julien Grall 

jobs:
 build-amd64-xend pass
 build-i386-xend  pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-prev pass
 build-i386-prev  pass
 build-amd64-pvops pass
 build-armhf-pvops pass
 build-i386-pvops pass
 build-amd64-rumpuserxen  fail
 build-i386-rumpuserxen   fail
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  pass
 test-amd64-i386-xl   pass
 test-amd64-i386-qemut-rhel6hvm-amd   pass
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64pass
 test-amd64-i386-xl-qemut-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-freebsd10-amd64  pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64   

[Xen-devel] [qemu-mainline test] 63384: regressions - FAIL

2015-11-01 Thread osstest service owner
flight 63384 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63384/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-xsm  16 guest-start/debian.repeat fail REGR. vs. 63363

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-libvirt-xsm  6 xen-boot  fail REGR. vs. 63363
 test-armhf-armhf-xl-rtds 11 guest-start  fail   like 63363

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-armhf-armhf-libvirt-raw  9 debian-di-install fail   never pass
 test-armhf-armhf-libvirt 14 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail  never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-qcow2  9 debian-di-install fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-libvirt-vhd  9 debian-di-install fail   never pass
 test-armhf-armhf-xl-vhd   9 debian-di-install fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail never pass

version targeted for testing:
 qemuu 3a958f559ecd0511583d27b10011fa7f3cf79b63
baseline version:
 qemuu 7bc8e0c967a4ef77657174d28af775691e18b4ce

Last test of basis    63363  2015-10-29 15:42:35 Z    3 days
Testing same since    63384  2015-10-30 22:10:27 Z    1 days    1 attempts


People who touched revisions under test:
  Christian Borntraeger 
  Cornelia Huck 
  Denis V. Lunev 
  Dr. David Alan Gilbert 
  James Hogan 
  Kevin Wolf 
  Leon Alrae 
  Markus Armbruster 
  Paolo Bonzini 
  Pavel Butsykin 
  Peter Maydell 
  Sai Pavan Boddu 
  Sai Pavan Boddu 
  Stefan Hajnoczi 
  Yongbok Kim 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvops pass
 build-armhf-pvops pass
 build-i386-pvops pass
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  pass
 test-amd64-i386-xl   pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 

[Xen-devel] [ovmf baseline-only test] 38237: all pass

2015-11-01 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 38237 ovmf real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/38237/

Perfect :-)
All tests in this flight passed
version targeted for testing:
 ovmf df60fb4cc2ca896fcea9e37b06c276d569f1a6b8
baseline version:
 ovmf 843f8ca01bc195cd077f13512fe285e8db9a3984

Last test of basis    38233  2015-10-31 10:00:14 Z    1 days
Testing same since    38237  2015-11-01 21:20:34 Z    0 days    1 attempts


People who touched revisions under test:
  Laszlo Ersek 
  Michael Kinney 
  Nagaraj Hegde 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvops pass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.


commit df60fb4cc2ca896fcea9e37b06c276d569f1a6b8
Author: Michael Kinney 
Date:   Fri Oct 30 17:53:53 2015 +

SourceLevelDebugPkg: DebugAgent: Set Local APIC SoftwareEnable

Update DebugAgent to make sure the Local APIC SoftwareEnable bit is set
before using the Local APIC Timer.

Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Michael Kinney 
Reviewed-by: Hao Wu 
Reviewed-by: Jeff Fan 

git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18712 6f19259b-4bc3-4df7-8a09-765794883524

commit 14e4ca25c6199fa29bda7066f31d919197840664
Author: Michael Kinney 
Date:   Fri Oct 30 17:53:31 2015 +

UefiCpuPkg: LocalApicLib: Add API to set SoftwareEnable bit

The LocalApicLib does not provide a function to manage the state of the
Local APIC SoftwareEnable bit in the Spurious Vector register.  There
are cases where this bit needs to be managed without side effects on
other Local APIC registers.  One use case is in the DebugAgent in the
SourceLevelDebugPkg.

Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Michael Kinney 
Reviewed-by: Hao Wu 
Reviewed-by: Jeff Fan 

git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18711 6f19259b-4bc3-4df7-8a09-765794883524

commit 0d4c1db81aab86963536deb8253f35546c4398ea
Author: Michael Kinney 
Date:   Fri Oct 30 17:32:27 2015 +

UefiCpuPkg: CpuDxe: Update GDT to be consistent with DxeIplPeim

The PiSmmCpuDxeSmm module makes some assumptions about GDT selectors
that are based on the GDT layout from the DxeIplPeim.  For example,
the protected mode entry code and (where appropriate) the long mode
entry code in the UefiCpuPkg/PiSmmCpuDxeSmm/*/MpFuncs.* assembly
files, which are used during S3 resume, open-code segment selector
values that depend on DxeIplPeim's GDT layout.

This updates the CpuDxe module to use the same GDT layout as the
DxeIplPeim.  This enables modules that are dispatched after
CpuDxe to find, and potentially save and restore, a GDT layout that
matches that of DxeIplPeim.  The DxeIplPeim has 2 GDT entries for
data selectors that are identical.  These are LINEAR_SEL (GDT offset
0x08) and LINEAR_DATA64_SEL (GDT offset 0x30).  LINEAR_SEL is used
for IA32 DXE and LINEAR_DATA64_SEL is used for X64 DXE.  This
duplicate data selector was added to the CpuDxe module to keep the
GDT and all selectors consistent.

Using a consistent GDT also improves debug experience.

Reported-by: Laszlo Ersek 
Analyzed-by: Laszlo Ersek 
Link: http://article.gmane.org/gmane.comp.bios.edk2.devel/3568
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Michael Kinney 

[Xen-devel] [xen-4.4-testing baseline-only test] 38236: tolerable FAIL

2015-11-01 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 38236 xen-4.4-testing real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/38236/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 build-i386-rumpuserxen    6 xen-build fail   never pass
 test-armhf-armhf-libvirt-raw  9 debian-di-install fail   never pass
 test-armhf-armhf-libvirt 11 guest-start  fail   never pass
 test-armhf-armhf-xl-vhd   9 debian-di-install fail   never pass
 test-armhf-armhf-libvirt-qcow2  9 debian-di-install fail never pass
 build-amd64-rumpuserxen   6 xen-build fail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail  never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-midway   12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-midway   13 saverestore-support-check fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 21 leak-check/check fail never pass

version targeted for testing:
 xen  73b70e3c5d59e63126c890068ee0cbf8a2a3b640
baseline version:
 xen  e321898a39222ad1feef352d65f71cef362b4a16

Last test of basis    38198  2015-10-22 13:55:20 Z   10 days
Testing same since    38236  2015-11-01 15:50:10 Z    0 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Ian Campbell 
  Ian Jackson 
  Jan Beulich 
  Julien Grall 

jobs:
 build-amd64-xend pass
 build-i386-xend  pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-prev pass
 build-i386-prev  pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 build-amd64-rumpuserxen  fail
 build-i386-rumpuserxen   fail
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  pass
 test-amd64-i386-xl   pass
 test-amd64-i386-qemut-rhel6hvm-amd   pass
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64pass
 test-amd64-i386-xl-qemut-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-freebsd10-amd64  pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass
 test-amd64-amd64-rumpuserxen-amd64   blocked
 test-amd64-amd64-xl-qemut-win7-amd64 fail
 test-amd64-i386-xl-qemut-win7-amd64  fail
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i386-xl-qemuu-win7-amd64  fail
 test-amd64-amd64-xl-credit2  pass
 

[Xen-devel] [linux-linus test] 63398: regressions - FAIL

2015-11-01 Thread osstest service owner
flight 63398 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63398/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 16 guest-localmigrate/x10 
fail REGR. vs. 59254
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 13 guest-localmigrate 
fail REGR. vs. 59254

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-rumpuserxen-amd64 15 
rumpuserxen-demo-xenstorels/xenstorels.repeat fail REGR. vs. 59254
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeatfail   like 59254
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 59254
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail like 59254

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-vhd   9 debian-di-installfail   never pass
 test-amd64-amd64-xl-pvh-intel 14 guest-saverestorefail  never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestorefail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-armhf-armhf-libvirt-qcow2  9 debian-di-installfail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 14 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-libvirt-raw  9 debian-di-installfail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail never pass

version targeted for testing:
 linux                38dab9ac1c017e96dc98e978111e365134d41d13
baseline version:
 linux                45820c294fe1b1a9df495d57f40585ef2d069a39

 Last test of basis    59254  2015-07-09 04:20:48 Z  115 days
 Failing since         59348  2015-07-10 04:24:05 Z  114 days   72 attempts
 Testing same since    63398  2015-10-31 14:53:56 Z    1 days    1 attempts


2484 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 build-amd64-rumpuserxen  pass
 build-i386-rumpuserxen   pass
 test-amd64-amd64-xl 

[Xen-devel] [PATCH v4 08/10] xen/blkback: get the number of hardware queues/rings from blkfront

2015-11-01 Thread Bob Liu
The backend advertises "multi-queue-max-queues" to the frontend, then gets the
negotiated number of queues from the "multi-queue-num-queues" key written by blkfront.

Signed-off-by: Bob Liu 
---
 drivers/block/xen-blkback/blkback.c | 11 +++
 drivers/block/xen-blkback/common.h  |  1 +
 drivers/block/xen-blkback/xenbus.c  | 35 +--
 3 files changed, 41 insertions(+), 6 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c 
b/drivers/block/xen-blkback/blkback.c
index eaf7ec0..107cc4a 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -83,6 +83,11 @@ module_param_named(max_persistent_grants, 
xen_blkif_max_pgrants, int, 0644);
 MODULE_PARM_DESC(max_persistent_grants,
  "Maximum number of grants to map persistently");
 
+unsigned int xenblk_max_queues;
+module_param_named(max_queues, xenblk_max_queues, uint, 0644);
+MODULE_PARM_DESC(max_queues,
+"Maximum number of hardware queues per virtual disk");
+
 /*
  * Maximum order of pages to be used for the shared ring between front and
  * backend, 4KB page granularity is used.
@@ -1478,6 +1483,12 @@ static int __init xen_blkif_init(void)
xen_blkif_max_ring_order = XENBUS_MAX_RING_PAGE_ORDER;
}
 
+   /* Allow as many queues as there are CPUs if user has not
+* specified a value.
+*/
+   if (xenblk_max_queues == 0)
+   xenblk_max_queues = num_online_cpus();
+
rc = xen_blkif_interface_init();
if (rc)
goto failed_init;
diff --git a/drivers/block/xen-blkback/common.h 
b/drivers/block/xen-blkback/common.h
index 4de1326..fb28b91 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -45,6 +45,7 @@
 #include 
 
 extern unsigned int xen_blkif_max_ring_order;
+extern unsigned int xenblk_max_queues;
 /*
  * This is the maximum number of segments that would be allowed in indirect
  * requests. This value will also be passed to the frontend.
diff --git a/drivers/block/xen-blkback/xenbus.c 
b/drivers/block/xen-blkback/xenbus.c
index ac4b458..cafbadd 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -181,12 +181,6 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
	INIT_LIST_HEAD(&blkif->persistent_purge_list);
	INIT_WORK(&blkif->persistent_purge_work, xen_blkbk_unmap_purged_grants);
 
-   blkif->nr_rings = 1;
-   if (xen_blkif_alloc_rings(blkif)) {
-   kmem_cache_free(xen_blkif_cachep, blkif);
-   return ERR_PTR(-ENOMEM);
-   }
-
return blkif;
 }
 
@@ -606,6 +600,14 @@ static int xen_blkbk_probe(struct xenbus_device *dev,
goto fail;
}
 
+	/* Multi-queue: write how many queues are supported by the backend. */
+	err = xenbus_printf(XBT_NIL, dev->nodename,
+			    "multi-queue-max-queues", "%u", xenblk_max_queues);
+	if (err) {
+		pr_warn("Error writing multi-queue-max-queues\n");
+		goto fail;
+	}
+
/* setup back pointer */
be->blkif->be = be;
 
@@ -997,6 +999,7 @@ static int connect_ring(struct backend_info *be)
char *xspath;
size_t xspathsize;
	const size_t xenstore_path_ext_size = 11; /* sufficient for "/queue-NNN" */
+   unsigned int requested_num_queues = 0;
 
pr_debug("%s %s\n", __func__, dev->otherend);
 
@@ -1024,6 +1027,26 @@ static int connect_ring(struct backend_info *be)
be->blkif->vbd.feature_gnt_persistent = pers_grants;
be->blkif->vbd.overflow_max_grants = 0;
 
+	/*
+	 * Read the number of hardware queues from frontend.
+	 */
+	err = xenbus_scanf(XBT_NIL, dev->otherend, "multi-queue-num-queues",
+			   "%u", &requested_num_queues);
+	if (err < 0) {
+		requested_num_queues = 1;
+	} else {
+		if (requested_num_queues > xenblk_max_queues
+		    || requested_num_queues == 0) {
+			/* buggy or malicious guest */
+			xenbus_dev_fatal(dev, err,
+					 "guest requested %u queues, exceeding the maximum of %u.",
+					 requested_num_queues, xenblk_max_queues);
+			return -1;
+		}
+	}
+   be->blkif->nr_rings = requested_num_queues;
+   if (xen_blkif_alloc_rings(be->blkif))
+   return -ENOMEM;
+
pr_info("nr_rings:%d protocol %d (%s) %s\n", be->blkif->nr_rings,
 be->blkif->blk_protocol, protocol,
 pers_grants ? "persistent grants" : "");
-- 
1.8.3.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH v4 09/10] xen/blkfront: make persistent grants per-queue

2015-11-01 Thread Bob Liu
Make persistent grants per-queue/ring instead of per-device, so that we can
drop the 'dev_lock' and get better scalability.

Signed-off-by: Bob Liu 
---
 drivers/block/xen-blkfront.c | 89 +---
 1 file changed, 34 insertions(+), 55 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 23096d7..eb19f08 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -133,6 +133,8 @@ struct blkfront_ring_info {
struct gnttab_free_callback callback;
struct blk_shadow shadow[BLK_MAX_RING_SIZE];
struct list_head indirect_pages;
+   struct list_head grants;
+   unsigned int persistent_gnts_c;
unsigned long shadow_free;
struct blkfront_info *dev_info;
 };
@@ -144,8 +146,6 @@ struct blkfront_ring_info {
  */
 struct blkfront_info
 {
-   /* Lock to proect info->grants list shared by multi rings */
-   spinlock_t dev_lock;
struct mutex mutex;
struct xenbus_device *xbdev;
struct gendisk *gd;
@@ -155,8 +155,6 @@ struct blkfront_info
/* Number of pages per ring buffer */
unsigned int nr_ring_pages;
struct request_queue *rq;
-   struct list_head grants;
-   unsigned int persistent_gnts_c;
unsigned int feature_flush;
unsigned int feature_discard:1;
unsigned int feature_secdiscard:1;
@@ -231,7 +229,6 @@ static int fill_grant_buffer(struct blkfront_ring_info 
*rinfo, int num)
struct grant *gnt_list_entry, *n;
int i = 0;
 
-	spin_lock_irq(&info->dev_lock);
while(i < num) {
gnt_list_entry = kzalloc(sizeof(struct grant), GFP_NOIO);
if (!gnt_list_entry)
@@ -247,35 +244,32 @@ static int fill_grant_buffer(struct blkfront_ring_info 
*rinfo, int num)
}
 
gnt_list_entry->gref = GRANT_INVALID_REF;
-		list_add(&gnt_list_entry->node, &info->grants);
+		list_add(&gnt_list_entry->node, &rinfo->grants);
		i++;
	}
-	spin_unlock_irq(&info->dev_lock);
 
return 0;
 
 out_of_memory:
	list_for_each_entry_safe(gnt_list_entry, n,
-				 &info->grants, node) {
+				 &rinfo->grants, node) {
list_del(_list_entry->node);
if (info->feature_persistent)
__free_page(pfn_to_page(gnt_list_entry->pfn));
kfree(gnt_list_entry);
i--;
}
-	spin_unlock_irq(&info->dev_lock);
BUG_ON(i != 0);
return -ENOMEM;
 }
 
 static struct grant *get_grant(grant_ref_t *gref_head,
unsigned long pfn,
-   struct blkfront_info *info)
+  struct blkfront_ring_info *info)
 {
struct grant *gnt_list_entry;
unsigned long buffer_gfn;
 
-	spin_lock(&info->dev_lock);
	BUG_ON(list_empty(&info->grants));
	gnt_list_entry = list_first_entry(&info->grants, struct grant,
					  node);
@@ -283,21 +277,19 @@ static struct grant *get_grant(grant_ref_t *gref_head,
 
if (gnt_list_entry->gref != GRANT_INVALID_REF) {
info->persistent_gnts_c--;
-	spin_unlock(&info->dev_lock);
return gnt_list_entry;
}
-	spin_unlock(&info->dev_lock);
 
/* Assign a gref to this page */
gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head);
BUG_ON(gnt_list_entry->gref == -ENOSPC);
-   if (!info->feature_persistent) {
+   if (!info->dev_info->feature_persistent) {
BUG_ON(!pfn);
gnt_list_entry->pfn = pfn;
}
buffer_gfn = pfn_to_gfn(gnt_list_entry->pfn);
gnttab_grant_foreign_access_ref(gnt_list_entry->gref,
-   info->xbdev->otherend_id,
+   info->dev_info->xbdev->otherend_id,
buffer_gfn, 0);
return gnt_list_entry;
 }
@@ -559,13 +551,13 @@ static int blkif_queue_request(struct request *req, 
struct blkfront_ring_info *r
list_del(_page->lru);
pfn = page_to_pfn(indirect_page);
}
-			gnt_list_entry = get_grant(&gref_head, pfn, info);
+			gnt_list_entry = get_grant(&gref_head, pfn, rinfo);
			rinfo->shadow[id].indirect_grants[n] = gnt_list_entry;
			segments = kmap_atomic(pfn_to_page(gnt_list_entry->pfn));
			ring_req->u.indirect.indirect_grefs[n] = gnt_list_entry->gref;
		}

-		gnt_list_entry = get_grant(&gref_head, page_to_pfn(sg_page(sg)), info);
+		gnt_list_entry = get_grant(&gref_head, 

[Xen-devel] [PATCH v4 06/10] xen/blkback: separate ring information out of struct xen_blkif

2015-11-01 Thread Bob Liu
Split the per-ring information out into a new structure "xen_blkif_ring", so that
one vbd device can be associated with one or more rings/hardware queues.

Introduce 'pers_gnts_lock' to protect the pool of persistent grants, since we
may have multiple backend threads.

This patch is a preparation for supporting multi hardware queues/rings.

Signed-off-by: Arianna Avanzini 
Signed-off-by: Bob Liu 
---
 drivers/block/xen-blkback/blkback.c | 233 
 drivers/block/xen-blkback/common.h  |  64 ++
 drivers/block/xen-blkback/xenbus.c  | 107 ++---
 3 files changed, 234 insertions(+), 170 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c 
b/drivers/block/xen-blkback/blkback.c
index 6a685ae..eaf7ec0 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -173,11 +173,11 @@ static inline void shrink_free_pagepool(struct xen_blkif 
*blkif, int num)
 
 #define vaddr(page) ((unsigned long)pfn_to_kaddr(page_to_pfn(page)))
 
-static int do_block_io_op(struct xen_blkif *blkif);
-static int dispatch_rw_block_io(struct xen_blkif *blkif,
+static int do_block_io_op(struct xen_blkif_ring *ring);
+static int dispatch_rw_block_io(struct xen_blkif_ring *ring,
struct blkif_request *req,
struct pending_req *pending_req);
-static void make_response(struct xen_blkif *blkif, u64 id,
+static void make_response(struct xen_blkif_ring *ring, u64 id,
  unsigned short op, int st);
 
 #define foreach_grant_safe(pos, n, rbtree, node) \
@@ -189,14 +189,8 @@ static void make_response(struct xen_blkif *blkif, u64 id,
 
 
 /*
- * We don't need locking around the persistent grant helpers
- * because blkback uses a single-thread for each backed, so we
- * can be sure that this functions will never be called recursively.
- *
- * The only exception to that is put_persistent_grant, that can be called
- * from interrupt context (by xen_blkbk_unmap), so we have to use atomic
- * bit operations to modify the flags of a persistent grant and to count
- * the number of used grants.
+ * pers_gnts_lock must be used around all the persistent grant helpers
+ * because blkback may use multi-thread/queue for each backend.
  */
 static int add_persistent_gnt(struct xen_blkif *blkif,
   struct persistent_gnt *persistent_gnt)
@@ -322,11 +316,13 @@ void xen_blkbk_unmap_purged_grants(struct work_struct 
*work)
int segs_to_unmap = 0;
	struct xen_blkif *blkif = container_of(work, typeof(*blkif), persistent_purge_work);
struct gntab_unmap_queue_data unmap_data;
+   unsigned long flags;
 
unmap_data.pages = pages;
unmap_data.unmap_ops = unmap;
unmap_data.kunmap_ops = NULL;
 
+	spin_lock_irqsave(&blkif->pers_gnts_lock, flags);
	while(!list_empty(&blkif->persistent_purge_list)) {
		persistent_gnt = list_first_entry(&blkif->persistent_purge_list,
  struct persistent_gnt,
@@ -348,6 +344,7 @@ void xen_blkbk_unmap_purged_grants(struct work_struct *work)
}
kfree(persistent_gnt);
}
+	spin_unlock_irqrestore(&blkif->pers_gnts_lock, flags);
if (segs_to_unmap > 0) {
unmap_data.count = segs_to_unmap;
		BUG_ON(gnttab_unmap_refs_sync(&unmap_data));
@@ -362,16 +359,18 @@ static void purge_persistent_gnt(struct xen_blkif *blkif)
unsigned int num_clean, total;
bool scan_used = false, clean_used = false;
struct rb_root *root;
+   unsigned long flags;
 
+	spin_lock_irqsave(&blkif->pers_gnts_lock, flags);
if (blkif->persistent_gnt_c < xen_blkif_max_pgrants ||
(blkif->persistent_gnt_c == xen_blkif_max_pgrants &&
!blkif->vbd.overflow_max_grants)) {
-   return;
+   goto out;
}
 
	if (work_busy(&blkif->persistent_purge_work)) {
		pr_alert_ratelimited("Scheduled work from previous purge is still busy, cannot purge list\n");
-   return;
+   goto out;
}
 
num_clean = (xen_blkif_max_pgrants / 100) * LRU_PERCENT_CLEAN;
@@ -379,7 +378,7 @@ static void purge_persistent_gnt(struct xen_blkif *blkif)
num_clean = min(blkif->persistent_gnt_c, num_clean);
if ((num_clean == 0) ||
	    (num_clean > (blkif->persistent_gnt_c - atomic_read(&blkif->persistent_gnt_in_use))))
-   return;
+   goto out;
 
/*
 * At this point, we can assure that there will be no calls
@@ -436,29 +435,35 @@ finished:
}
 
blkif->persistent_gnt_c -= (total - num_clean);
+	spin_unlock_irqrestore(&blkif->pers_gnts_lock, flags);
blkif->vbd.overflow_max_grants = 0;
 
/* We can defer this work */
	schedule_work(&blkif->persistent_purge_work);
pr_debug("Purged %u/%u\n", (total - 

Re: [Xen-devel] Question about XEN Hypervisor MSR capability exposion to VMs

2015-11-01 Thread Zhang, Yang Z
Liuyingdong wrote on 2015-10-31:
> Hi All
> 
> We encountered a blue screen problem when live migrating a
> Win8.1/Win2012R2 64-bit VM from a V3 processor to a non-V3 processor
> sandbox; KVM does not have this problem.
> 
> After looking into the MSR capabilities, we found the Xen hypervisor
> exposed bit 39 and bit 18 to the VM; per the Intel manual, bit 39 is a
> reserved bit and should not be set, and bit 18 refers to the MWAIT/MONITOR

Reserved doesn't mean it must be zero or one. Can you help check it on the host?

> capability; in my understanding it should not be exposed to the VM either.

Yes, the MWAIT/MONITOR capability should be hidden from the guest.

> BTW, KVM does not expose bit 18/39 to the VM.
> 
> Below is the boot message:
> (XEN) read msr: ecx=c083, msr_value=0xf80028ddf240
> (XEN) read msr: ecx=1a0, msr_value=0x4000801889
> (XEN) write msr:msr=4071, msr_value=0x100082f
> (XEN) write msr:msr=4070, msr_value=0x0
> (XEN) write msr:msr=4071, msr_value=0x200082f
> (XEN) write msr:msr=4070, msr_value=0x0
> (XEN) read msr: ecx=17, msr_value=0x0
> (XEN) write msr:msr=8b, msr_value=0x0
> (XEN) read msr: ecx=8b, msr_value=0x2d



Best regards,
Yang





Re: [Xen-devel] [OSSTEST PATCH v14 PART 2 10-26/26] Nested HVM testing

2015-11-01 Thread Hu, Robert
> -Original Message-
> From: Ian Jackson [mailto:ian.jack...@eu.citrix.com]
> Sent: Saturday, September 26, 2015 3:15 AM
> To: xen-de...@lists.xenproject.org
> Cc: Hu, Robert ; Ian Campbell
> ; Ian Jackson 
> Subject: [OSSTEST PATCH v14 PART 2 10-26/26] Nested HVM testing
> 
> This is the second part of v14 Robert Ho's osstest patch series to
> support nested HVM tests.
> 
> It is also available here:
>   git://xenbits.xen.org/people/iwj/xen.git
>   http://xenbits.xen.org/git-http/people/iwj/xen.git
> in wip.nested-hvm.v14.part1..wip.nested-hvm.v14
> 
> Compared to Robert's v13, which was passed to me by private email,
>  * I have rebased onto current osstest pretest;
>  * I have changed how selecthost() is told it's dealing with
>a nested host (in practice, L1 guest);
>  * There are a large number of minor cleanups;
>  * There are some new preparatory cleanup and admin patches;
>  * I have rewritten almost all of the commit messages.
> 
> However, I have done only VERY LIMITED testing.  Much of the code here
> is UNTESTED since my changes.  My testing was confined to:
>  * Verifying that my changes to cs-adjust-flight worked
>  * Checking that ad-hoc runs of ts-host-reboot and ts-host-powercycle
>seemed to work when a guest was specified on the command line.
> 
> Robert, you kindly volunteered to test a revised version of this
> series.  I would appreciate if you would check that all of this still
> works as you expect.  I expect there will be some bugs, perhaps even
> very silly bugs, introduced by me.
> 
> I noticed that this series lacks guest serial debug keys and log
> collection for the L1 guest, because there is no
> Osstest/Serial/guest.pm.  I would appreciate it if you would provide
> one.  I don't think it needs to actually collect any logs, because the
> L1 serial output log will be collected as part of the L0 log
> collection.  But it ought to support sending debug keys to the L1
> guest.  When you have provided it you can (in the same patch) fix the
> corresponding `todo' in selecthost, changing `noop' to `guest'.
[Hu, Robert] 

Hi Ian,
Are you sure you would like me to add this part? I took a glance at the module
code (noop, xenuse, etc.) and didn't quite understand it.
I can imitate them for Serial::guest.pm, but I'm afraid it will not be that good.

> 
> 
> Workflow:
> 
> Robert: I'm handing this (what I have called `part 2') over to you
> now.
> 
> When you make changes, feel free to either rebase, or to make fixup
> commits (perhaps in `git-rebase -i --autosquash' format) on top.  If
> you do the latter then you'll probably want to pass that to me as a
> git branch (via git push to xenbits or emailing me a git bundle),
> since `squash!' and `fixup!' commits don't look good in email :-).
> 
> If you rebase, please put changes
>v15: 
> in the commit messages, as I have done myself in v14.  Leave my v14
> notes in place.
[Hu, Robert] 

Now I've completed this part of the work. Shall I hand over the v15 bundle
to you, with the above unresolved?
Current changes based on your patch:
* Some fixes (which already got your confirmation) squashed into the original
patches, with v15 annotation.
* 2 fixes (which did not get your confirmation) are separated as !fixup patches
for your clear review; actually there is only 1 explicit fixup patch, the other
was by mistake squashed in, but I annotated it clearly.
* 2 more patches added, which you're already aware of:
Osstest/Testsupport.pm: change target's default kernkind to 'pvops'
Osstest/Testsupport.pm: use get_target_property() for some host setup


> 
> Of course if you have any comments or queries about how I have done
> things, they would be very welcome.
> 
> Please do not rebase any of the commits in wip.nested-hvm.v14.part1.
> If you discover bugs in `part 1' please let us know as I have fed that
> into the osstest self-test mill with the expectation that it will go
> into production.
> 
> I do not expect you to test the changes to cs-adjust-flight.  I have
> done that.  Indeed they are not really related to the Nested HVM work
> and Ian C and I may pick them up in another series.
> 
> 
> Ian Campbell: You probably want to defer re-reviewing this until
> Robert reports back.
> 
> Signed-off-by: Ian Jackson 
> 




[Xen-devel] [PATCH v4 07/10] xen/blkback: pseudo support for multi hardware queues/rings

2015-11-01 Thread Bob Liu
Preparatory patch for multiple hardware queues (rings). The number of
rings is unconditionally set to 1; larger numbers will be enabled in the next
patch, so as to keep every single patch small and readable.

Signed-off-by: Arianna Avanzini 
Signed-off-by: Bob Liu 
---
 drivers/block/xen-blkback/common.h |   3 +-
 drivers/block/xen-blkback/xenbus.c | 292 +++--
 2 files changed, 185 insertions(+), 110 deletions(-)

diff --git a/drivers/block/xen-blkback/common.h 
b/drivers/block/xen-blkback/common.h
index f0dd69a..4de1326 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -341,7 +341,8 @@ struct xen_blkif {
struct work_struct  free_work;
unsigned int nr_ring_pages;
/* All rings for this device */
-   struct xen_blkif_ring ring;
+   struct xen_blkif_ring *rings;
+   unsigned int nr_rings;
 };
 
 struct seg_buf {
diff --git a/drivers/block/xen-blkback/xenbus.c 
b/drivers/block/xen-blkback/xenbus.c
index 7bdd5fd..ac4b458 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -84,11 +84,12 @@ static int blkback_name(struct xen_blkif *blkif, char *buf)
 
 static void xen_update_blkif_status(struct xen_blkif *blkif)
 {
-   int err;
+   int err, i;
char name[BLKBACK_NAME_LEN];
+   struct xen_blkif_ring *ring;
 
/* Not ready to connect? */
-   if (!blkif->ring.irq || !blkif->vbd.bdev)
+   if (!blkif->rings || !blkif->rings[0].irq || !blkif->vbd.bdev)
return;
 
/* Already connected? */
@@ -113,19 +114,57 @@ static void xen_update_blkif_status(struct xen_blkif 
*blkif)
}
invalidate_inode_pages2(blkif->vbd.bdev->bd_inode->i_mapping);
 
-	blkif->ring.xenblkd = kthread_run(xen_blkif_schedule, &blkif->ring, "%s", name);
-   if (IS_ERR(blkif->ring.xenblkd)) {
-   err = PTR_ERR(blkif->ring.xenblkd);
-   blkif->ring.xenblkd = NULL;
-   xenbus_dev_error(blkif->be->dev, err, "start xenblkd");
-   return;
+   if (blkif->nr_rings == 1) {
+		blkif->rings[0].xenblkd = kthread_run(xen_blkif_schedule, &blkif->rings[0], "%s", name);
+   if (IS_ERR(blkif->rings[0].xenblkd)) {
+   err = PTR_ERR(blkif->rings[0].xenblkd);
+   blkif->rings[0].xenblkd = NULL;
+   xenbus_dev_error(blkif->be->dev, err, "start xenblkd");
+   return;
+   }
+   } else {
+   for (i = 0; i < blkif->nr_rings; i++) {
+			ring = &blkif->rings[i];
+			ring->xenblkd = kthread_run(xen_blkif_schedule, ring, "%s-%d", name, i);
+   if (IS_ERR(ring->xenblkd)) {
+   err = PTR_ERR(ring->xenblkd);
+   ring->xenblkd = NULL;
+   xenbus_dev_error(blkif->be->dev, err,
+   "start %s-%d xenblkd", name, i);
+   return;
+   }
+   }
+   }
+}
+
+static int xen_blkif_alloc_rings(struct xen_blkif *blkif)
+{
+   int r;
+
+	blkif->rings = kzalloc(blkif->nr_rings * sizeof(struct xen_blkif_ring), GFP_KERNEL);
+   if (!blkif->rings)
+   return -ENOMEM;
+
+	for (r = 0; r < blkif->nr_rings; r++) {
+		struct xen_blkif_ring *ring = &blkif->rings[r];
+
+		spin_lock_init(&ring->blk_ring_lock);
+		init_waitqueue_head(&ring->wq);
+		INIT_LIST_HEAD(&ring->pending_free);
+
+		spin_lock_init(&ring->pending_free_lock);
+		init_waitqueue_head(&ring->pending_free_wq);
+		init_waitqueue_head(&ring->shutdown_wq);
+   ring->blkif = blkif;
+   xen_blkif_get(blkif);
}
+
+   return 0;
 }
 
 static struct xen_blkif *xen_blkif_alloc(domid_t domid)
 {
struct xen_blkif *blkif;
-   struct xen_blkif_ring *ring;
 
BUILD_BUG_ON(MAX_INDIRECT_PAGES > BLKIF_MAX_INDIRECT_PAGES_PER_REQUEST);
 
@@ -136,27 +175,17 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
blkif->domid = domid;
	atomic_set(&blkif->refcnt, 1);
	init_completion(&blkif->drain_complete);
-	atomic_set(&blkif->drain, 0);
	INIT_WORK(&blkif->free_work, xen_blkif_deferred_free);
	spin_lock_init(&blkif->free_pages_lock);
	INIT_LIST_HEAD(&blkif->free_pages);
-	blkif->free_pages_num = 0;
-	blkif->persistent_gnts.rb_node = NULL;
	INIT_LIST_HEAD(&blkif->persistent_purge_list);
-	atomic_set(&blkif->persistent_gnt_in_use, 0);
	INIT_WORK(&blkif->persistent_purge_work, xen_blkbk_unmap_purged_grants);

-	ring = &blkif->ring;
-	ring->blkif = blkif;
-	spin_lock_init(&ring->blk_ring_lock);
-	init_waitqueue_head(&ring->wq);
-	ring->st_print = jiffies;
-	atomic_set(&ring->inflight, 0);
-
-   

[Xen-devel] [PATCH v4 10/10] xen/blkback: make pool of persistent grants and free pages per-queue

2015-11-01 Thread Bob Liu
Make pool of persistent grants and free pages per-queue/ring instead of
per-device to get better scalability.

Signed-off-by: Bob Liu 
---
 drivers/block/xen-blkback/blkback.c | 212 +---
 drivers/block/xen-blkback/common.h  |  32 +++---
 drivers/block/xen-blkback/xenbus.c  |  21 ++--
 3 files changed, 124 insertions(+), 141 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c 
b/drivers/block/xen-blkback/blkback.c
index 107cc4a..28cbdae 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -118,60 +118,60 @@ module_param(log_stats, int, 0644);
 /* Number of free pages to remove on each call to gnttab_free_pages */
 #define NUM_BATCH_FREE_PAGES 10
 
-static inline int get_free_page(struct xen_blkif *blkif, struct page **page)
+static inline int get_free_page(struct xen_blkif_ring *ring, struct page **page)
 {
unsigned long flags;
 
-	spin_lock_irqsave(&blkif->free_pages_lock, flags);
-	if (list_empty(&blkif->free_pages)) {
-		BUG_ON(blkif->free_pages_num != 0);
-		spin_unlock_irqrestore(&blkif->free_pages_lock, flags);
+	spin_lock_irqsave(&ring->free_pages_lock, flags);
+	if (list_empty(&ring->free_pages)) {
+		BUG_ON(ring->free_pages_num != 0);
+		spin_unlock_irqrestore(&ring->free_pages_lock, flags);
		return gnttab_alloc_pages(1, page);
	}
-	BUG_ON(blkif->free_pages_num == 0);
-	page[0] = list_first_entry(&blkif->free_pages, struct page, lru);
+	BUG_ON(ring->free_pages_num == 0);
+	page[0] = list_first_entry(&ring->free_pages, struct page, lru);
	list_del(&page[0]->lru);
-	blkif->free_pages_num--;
-	spin_unlock_irqrestore(&blkif->free_pages_lock, flags);
+	ring->free_pages_num--;
+	spin_unlock_irqrestore(&ring->free_pages_lock, flags);
 
return 0;
 }
 
-static inline void put_free_pages(struct xen_blkif *blkif, struct page **page,
+static inline void put_free_pages(struct xen_blkif_ring *ring, struct page **page,
   int num)
 {
unsigned long flags;
int i;
 
-	spin_lock_irqsave(&blkif->free_pages_lock, flags);
+	spin_lock_irqsave(&ring->free_pages_lock, flags);
	for (i = 0; i < num; i++)
-		list_add(&page[i]->lru, &blkif->free_pages);
-	blkif->free_pages_num += num;
-	spin_unlock_irqrestore(&blkif->free_pages_lock, flags);
+		list_add(&page[i]->lru, &ring->free_pages);
+	ring->free_pages_num += num;
+	spin_unlock_irqrestore(&ring->free_pages_lock, flags);
 }
 
-static inline void shrink_free_pagepool(struct xen_blkif *blkif, int num)
+static inline void shrink_free_pagepool(struct xen_blkif_ring *ring, int num)
 {
/* Remove requested pages in batches of NUM_BATCH_FREE_PAGES */
struct page *page[NUM_BATCH_FREE_PAGES];
unsigned int num_pages = 0;
unsigned long flags;
 
-	spin_lock_irqsave(&blkif->free_pages_lock, flags);
-	while (blkif->free_pages_num > num) {
-		BUG_ON(list_empty(&blkif->free_pages));
-		page[num_pages] = list_first_entry(&blkif->free_pages,
+	spin_lock_irqsave(&ring->free_pages_lock, flags);
+	while (ring->free_pages_num > num) {
+		BUG_ON(list_empty(&ring->free_pages));
+		page[num_pages] = list_first_entry(&ring->free_pages,
						   struct page, lru);
		list_del(&page[num_pages]->lru);
-		blkif->free_pages_num--;
+		ring->free_pages_num--;
		if (++num_pages == NUM_BATCH_FREE_PAGES) {
-			spin_unlock_irqrestore(&blkif->free_pages_lock, flags);
+			spin_unlock_irqrestore(&ring->free_pages_lock, flags);
			gnttab_free_pages(num_pages, page);
-			spin_lock_irqsave(&blkif->free_pages_lock, flags);
+			spin_lock_irqsave(&ring->free_pages_lock, flags);
			num_pages = 0;
		}
	}
-	spin_unlock_irqrestore(&blkif->free_pages_lock, flags);
+	spin_unlock_irqrestore(&ring->free_pages_lock, flags);
if (num_pages != 0)
gnttab_free_pages(num_pages, page);
 }
@@ -194,22 +194,29 @@ static void make_response(struct xen_blkif_ring *ring, 
u64 id,
 
 
 /*
- * pers_gnts_lock must be used around all the persistent grant helpers
- * because blkback may use multi-thread/queue for each backend.
+ * We don't need locking around the persistent grant helpers
+ * because blkback uses a single-thread for each backed, so we
+ * can be sure that this functions will never be called recursively.
+ *
+ * The only exception to that is put_persistent_grant, that can be called
+ * from interrupt context (by xen_blkbk_unmap), so we have to use atomic
+ * bit operations to modify the flags of a persistent grant and to count
+ * the number of used grants.
  */
-static int add_persistent_gnt(struct xen_blkif *blkif,
+static int add_persistent_gnt(struct xen_blkif_ring 

[Xen-devel] [PATCH v4 03/10] xen/blkfront: pseudo support for multi hardware queues/rings

2015-11-01 Thread Bob Liu
Preparatory patch for multiple hardware queues (rings). The number of
rings is unconditionally set to 1; larger numbers will be enabled in the next
patch, so as to keep every single patch small and readable.

Signed-off-by: Bob Liu 
---
 drivers/block/xen-blkfront.c | 327 +--
 1 file changed, 188 insertions(+), 139 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 2a557e4..eab78e7 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -145,6 +145,7 @@ struct blkfront_info
int vdevice;
blkif_vdev_t handle;
enum blkif_state connected;
+   /* Number of pages per ring buffer */
unsigned int nr_ring_pages;
struct request_queue *rq;
struct list_head grants;
@@ -158,7 +159,8 @@ struct blkfront_info
unsigned int max_indirect_segments;
int is_ready;
struct blk_mq_tag_set tag_set;
-   struct blkfront_ring_info rinfo;
+   struct blkfront_ring_info *rinfo;
+   unsigned int nr_rings;
 };
 
 static unsigned int nr_minors;
@@ -190,7 +192,7 @@ static DEFINE_SPINLOCK(minor_lock);
((_segs + SEGS_PER_INDIRECT_FRAME - 1)/SEGS_PER_INDIRECT_FRAME)
 
 static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo);
-static int blkfront_gather_backend_features(struct blkfront_info *info);
+static void blkfront_gather_backend_features(struct blkfront_info *info);
 
 static int get_id_from_freelist(struct blkfront_ring_info *rinfo)
 {
@@ -443,12 +445,13 @@ static int blkif_queue_request(struct request *req, struct blkfront_ring_info *r
 */
max_grefs += INDIRECT_GREFS(req->nr_phys_segments);
 
-	/* Check if we have enough grants to allocate a requests */
-	if (info->persistent_gnts_c < max_grefs) {
+	/* Check if we have enough grants to allocate a request; we have to
+	 * reserve 'max_grefs' grants because persistent grants are shared by
+	 * all rings. */
+	if (0 < max_grefs) {
 		new_persistent_gnts = 1;
 		if (gnttab_alloc_grant_references(
-			max_grefs - info->persistent_gnts_c,
-			&gref_head) < 0) {
+			max_grefs, &gref_head) < 0) {
 			gnttab_request_free_callback(
 				&rinfo->callback,
 				blkif_restart_queue_callback,
@@ -665,7 +668,7 @@ static int blk_mq_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
 {
struct blkfront_info *info = (struct blkfront_info *)data;
 
-	hctx->driver_data = &info->rinfo;
+	hctx->driver_data = &info->rinfo[index];
return 0;
 }
 
@@ -924,8 +927,7 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
 
 static void xlvbd_release_gendisk(struct blkfront_info *info)
 {
-   unsigned int minor, nr_minors;
-	struct blkfront_ring_info *rinfo = &info->rinfo;
+   unsigned int minor, nr_minors, i;
 
if (info->rq == NULL)
return;
@@ -933,11 +935,15 @@ static void xlvbd_release_gendisk(struct blkfront_info *info)
/* No more blkif_request(). */
blk_mq_stop_hw_queues(info->rq);
 
-	/* No more gnttab callback work. */
-	gnttab_cancel_free_callback(&rinfo->callback);
+	for (i = 0; i < info->nr_rings; i++) {
+		struct blkfront_ring_info *rinfo = &info->rinfo[i];
 
-	/* Flush gnttab callback work. Must be done with no locks held. */
-	flush_work(&rinfo->work);
+		/* No more gnttab callback work. */
+		gnttab_cancel_free_callback(&rinfo->callback);
+
+		/* Flush gnttab callback work. Must be done with no locks held. */
+		flush_work(&rinfo->work);
+	}
 
del_gendisk(info->gd);
 
@@ -970,37 +976,11 @@ static void blkif_restart_queue(struct work_struct *work)
 	spin_unlock_irq(&rinfo->dev_info->io_lock);
 }
 
-static void blkif_free(struct blkfront_info *info, int suspend)
+static void blkif_free_ring(struct blkfront_ring_info *rinfo)
 {
struct grant *persistent_gnt;
-   struct grant *n;
+   struct blkfront_info *info = rinfo->dev_info;
int i, j, segs;
-	struct blkfront_ring_info *rinfo = &info->rinfo;
-
-	/* Prevent new requests being issued until we fix things up. */
-	spin_lock_irq(&info->io_lock);
-	info->connected = suspend ?
-		BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
-	/* No more blkif_request(). */
-	if (info->rq)
-		blk_mq_stop_hw_queues(info->rq);
-
-	/* Remove all persistent grants */
-	if (!list_empty(&info->grants)) {
-		list_for_each_entry_safe(persistent_gnt, n,
-					 &info->grants, node) {
-			list_del(&persistent_gnt->node);
-			if (persistent_gnt->gref != GRANT_INVALID_REF) {
-				gnttab_end_foreign_access(persistent_gnt->gref,
-

[Xen-devel] [PATCH v4 02/10] xen/blkfront: separate per ring information out of device info

2015-11-01 Thread Bob Liu
Split per-ring information out into a new structure, "blkfront_ring_info".

A ring is the representation of a hardware queue; every vbd device can be
associated with one or more rings, depending on how many hardware queues/rings
are to be used.

This patch is a preparation for supporting real multi hardware queues/rings.

Signed-off-by: Arianna Avanzini 
Signed-off-by: Bob Liu 
---
 drivers/block/xen-blkfront.c | 321 ---
 1 file changed, 178 insertions(+), 143 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index a69c02d..2a557e4 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -115,6 +115,23 @@ MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the
 #define RINGREF_NAME_LEN (20)
 
 /*
+ *  Per-ring info.
+ *  Every blkfront device can associate with one or more blkfront_ring_info,
+ *  depending on how many hardware queues/rings to be used.
+ */
+struct blkfront_ring_info {
+   struct blkif_front_ring ring;
+   unsigned int ring_ref[XENBUS_MAX_RING_PAGES];
+   unsigned int evtchn, irq;
+   struct work_struct work;
+   struct gnttab_free_callback callback;
+   struct blk_shadow shadow[BLK_MAX_RING_SIZE];
+   struct list_head indirect_pages;
+   unsigned long shadow_free;
+   struct blkfront_info *dev_info;
+};
+
+/*
  * We have one of these per vbd, whether ide, scsi or 'other'.  They
  * hang in private_data off the gendisk structure. We may end up
  * putting all kinds of interesting stuff here :-)
@@ -128,18 +145,10 @@ struct blkfront_info
int vdevice;
blkif_vdev_t handle;
enum blkif_state connected;
-   int ring_ref[XENBUS_MAX_RING_PAGES];
unsigned int nr_ring_pages;
-   struct blkif_front_ring ring;
-   unsigned int evtchn, irq;
struct request_queue *rq;
-   struct work_struct work;
-   struct gnttab_free_callback callback;
-   struct blk_shadow shadow[BLK_MAX_RING_SIZE];
struct list_head grants;
-   struct list_head indirect_pages;
unsigned int persistent_gnts_c;
-   unsigned long shadow_free;
unsigned int feature_flush;
unsigned int feature_discard:1;
unsigned int feature_secdiscard:1;
@@ -149,6 +158,7 @@ struct blkfront_info
unsigned int max_indirect_segments;
int is_ready;
struct blk_mq_tag_set tag_set;
+   struct blkfront_ring_info rinfo;
 };
 
 static unsigned int nr_minors;
@@ -179,33 +189,35 @@ static DEFINE_SPINLOCK(minor_lock);
 #define INDIRECT_GREFS(_segs) \
((_segs + SEGS_PER_INDIRECT_FRAME - 1)/SEGS_PER_INDIRECT_FRAME)
 
-static int blkfront_setup_indirect(struct blkfront_info *info);
+static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo);
 static int blkfront_gather_backend_features(struct blkfront_info *info);
 
-static int get_id_from_freelist(struct blkfront_info *info)
+static int get_id_from_freelist(struct blkfront_ring_info *rinfo)
 {
-   unsigned long free = info->shadow_free;
-   BUG_ON(free >= BLK_RING_SIZE(info));
-   info->shadow_free = info->shadow[free].req.u.rw.id;
-   info->shadow[free].req.u.rw.id = 0x0fee; /* debug */
+   unsigned long free = rinfo->shadow_free;
+
+   BUG_ON(free >= BLK_RING_SIZE(rinfo->dev_info));
+   rinfo->shadow_free = rinfo->shadow[free].req.u.rw.id;
+   rinfo->shadow[free].req.u.rw.id = 0x0fee; /* debug */
return free;
 }
 
-static int add_id_to_freelist(struct blkfront_info *info,
+static int add_id_to_freelist(struct blkfront_ring_info *rinfo,
   unsigned long id)
 {
-   if (info->shadow[id].req.u.rw.id != id)
+   if (rinfo->shadow[id].req.u.rw.id != id)
return -EINVAL;
-   if (info->shadow[id].request == NULL)
+   if (rinfo->shadow[id].request == NULL)
return -EINVAL;
-   info->shadow[id].req.u.rw.id  = info->shadow_free;
-   info->shadow[id].request = NULL;
-   info->shadow_free = id;
+   rinfo->shadow[id].req.u.rw.id  = rinfo->shadow_free;
+   rinfo->shadow[id].request = NULL;
+   rinfo->shadow_free = id;
return 0;
 }
 
-static int fill_grant_buffer(struct blkfront_info *info, int num)
+static int fill_grant_buffer(struct blkfront_ring_info *rinfo, int num)
 {
+   struct blkfront_info *info = rinfo->dev_info;
struct page *granted_page;
struct grant *gnt_list_entry, *n;
int i = 0;
@@ -341,8 +353,8 @@ static void xlbd_release_minors(unsigned int minor, unsigned int nr)
 
 static void blkif_restart_queue_callback(void *arg)
 {
-	struct blkfront_info *info = (struct blkfront_info *)arg;
-	schedule_work(&info->work);
+	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)arg;
+	schedule_work(&rinfo->work);
 }
 
 static int blkif_getgeo(struct block_device *bd, struct hd_geometry 

[Xen-devel] [PATCH v4 05/10] xen/blkfront: negotiate number of queues/rings to be used with backend

2015-11-01 Thread Bob Liu
The number of hardware queues for xen/blkfront is set by the module parameter
'max_queues' (default 4), while the maximum value supported by xen/blkback is
advertised through the xenstore key "multi-queue-max-queues".

The negotiated number is the smaller of the two and is written back to xenstore
as "multi-queue-num-queues", which blkback needs to read.

Signed-off-by: Bob Liu 
---
 drivers/block/xen-blkfront.c | 166 +++
 1 file changed, 120 insertions(+), 46 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 8cc5995..23096d7 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -98,6 +98,10 @@ static unsigned int xen_blkif_max_segments = 32;
 module_param_named(max, xen_blkif_max_segments, int, S_IRUGO);
 MODULE_PARM_DESC(max, "Maximum amount of segments in indirect requests (default is 32)");
 
+static unsigned int xen_blkif_max_queues = 4;
+module_param_named(max_queues, xen_blkif_max_queues, uint, S_IRUGO);
+MODULE_PARM_DESC(max_queues, "Maximum number of hardware queues/rings used per virtual disk");
+
 /*
  * Maximum order of pages to be used for the shared ring between front and
  * backend, 4KB page granularity is used.
@@ -113,6 +117,7 @@ MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the
  * characters are enough. Define to 20 to keep consistent with backend.
  */
 #define RINGREF_NAME_LEN (20)
+#define QUEUE_NAME_LEN (12)
 
 /*
  *  Per-ring info.
@@ -695,7 +700,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size,
 
 	memset(&info->tag_set, 0, sizeof(info->tag_set));
 	info->tag_set.ops = &blkfront_mq_ops;
-   info->tag_set.nr_hw_queues = 1;
+   info->tag_set.nr_hw_queues = info->nr_rings;
info->tag_set.queue_depth =  BLK_RING_SIZE(info);
info->tag_set.numa_node = NUMA_NO_NODE;
info->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE;
@@ -1352,6 +1357,51 @@ fail:
return err;
 }
 
+static int write_per_ring_nodes(struct xenbus_transaction xbt,
+				struct blkfront_ring_info *rinfo, const char *dir)
+{
+   int err, i;
+   const char *message = NULL;
+   struct blkfront_info *info = rinfo->dev_info;
+
+   if (info->nr_ring_pages == 1) {
+		err = xenbus_printf(xbt, dir, "ring-ref", "%u", rinfo->ring_ref[0]);
+   if (err) {
+   message = "writing ring-ref";
+   goto abort_transaction;
+   }
+   pr_info("%s: write ring-ref:%d\n", dir, rinfo->ring_ref[0]);
+   } else {
+   for (i = 0; i < info->nr_ring_pages; i++) {
+   char ring_ref_name[RINGREF_NAME_LEN];
+
+			snprintf(ring_ref_name, RINGREF_NAME_LEN, "ring-ref%u", i);
+   err = xenbus_printf(xbt, dir, ring_ref_name,
+   "%u", rinfo->ring_ref[i]);
+   if (err) {
+   message = "writing ring-ref";
+   goto abort_transaction;
+   }
+			pr_info("%s: write ring-ref:%d\n", dir, rinfo->ring_ref[i]);
+   }
+   }
+
+   err = xenbus_printf(xbt, dir, "event-channel", "%u", rinfo->evtchn);
+   if (err) {
+   message = "writing event-channel";
+   goto abort_transaction;
+   }
+   pr_info("%s: write event-channel:%d\n", dir, rinfo->evtchn);
+
+   return 0;
+
+abort_transaction:
+   xenbus_transaction_end(xbt, 1);
+   if (message)
+   xenbus_dev_fatal(info->xbdev, err, "%s", message);
+
+   return err;
+}
 
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_blkback(struct xenbus_device *dev,
@@ -1362,7 +1412,6 @@ static int talk_to_blkback(struct xenbus_device *dev,
int err, i;
unsigned int max_page_order = 0;
unsigned int ring_page_order = 0;
-   struct blkfront_ring_info *rinfo;
 
err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
 			   "max-ring-page-order", "%u", &max_page_order);
@@ -1374,7 +1423,8 @@ static int talk_to_blkback(struct xenbus_device *dev,
}
 
 	for (i = 0; i < info->nr_rings; i++) {
-		rinfo = &info->rinfo[i];
+		struct blkfront_ring_info *rinfo = &info->rinfo[i];
+
/* Create shared ring, alloc event channel. */
err = setup_blkring(dev, rinfo);
if (err)
@@ -1388,45 +1438,51 @@ again:
goto destroy_blkring;
}
 
-	if (info->nr_rings == 1) {
-		rinfo = &info->rinfo[0];
-		if (info->nr_ring_pages == 1) {
-			err = xenbus_printf(xbt, dev->nodename,
-					    "ring-ref", "%u", rinfo->ring_ref[0]);
-			if (err) {
- 

[Xen-devel] [PATCH v4 00/10] xen-block: multi hardware-queues/rings support

2015-11-01 Thread Bob Liu
Note: These patches were based on the original work of Arianna's internship for
GNOME's Outreach Program for Women.

After switching to the blk-mq API, a guest has more than one (nr_vcpus)
software request queue associated with each block front. These queues can be
mapped over several rings (hardware queues) to the backend, making it very easy
for us to run multiple threads on the backend for a single virtual disk.

By having different threads issue requests at the same time, the performance of
the guest can be improved significantly.

Test was done based on null_blk driver:
dom0: v4.3-rc7 16vcpus 10GB "modprobe null_blk"
domU: v4.3-rc7 16vcpus 10GB

[test]
rw=read
direct=1
ioengine=libaio
bs=4k
time_based
runtime=30
filename=/dev/xvdb
numjobs=16
iodepth=64
iodepth_batch=64
iodepth_batch_complete=64
group_reporting

         domU(orig)   4 queues       8 queues   16 queues
iops:    690k         1024k(+30%)    800k       750k

After patch 9 and 10:
         domU(orig)   4 queues       8 queues   16 queues
iops:    690k         1600k(+100%)   1450k      1320k

Chart: https://www.dropbox.com/s/agrcy2pbzbsvmwv/iops.png?dl=0

Also see huge improvements for write and real SSD storage.

---
v4:
 * Rebase to v4.3-rc7
 * Comments from Roger

v3:
 * Rebased to v4.2-rc8

Bob Liu (10):
  xen/blkif: document blkif multi-queue/ring extension
  xen/blkfront: separate per ring information out of device info
  xen/blkfront: pseudo support for multi hardware queues/rings
  xen/blkfront: split per device io_lock
  xen/blkfront: negotiate number of queues/rings to be used with backend
  xen/blkback: separate ring information out of struct xen_blkif
  xen/blkback: pseudo support for multi hardware queues/rings
  xen/blkback: get the number of hardware queues/rings from blkfront
  xen/blkfront: make persistent grants per-queue
  xen/blkback: make pool of persistent grants and free pages per-queue

 drivers/block/xen-blkback/blkback.c | 386 ++-
 drivers/block/xen-blkback/common.h  |  78 ++--
 drivers/block/xen-blkback/xenbus.c  | 359 --
 drivers/block/xen-blkfront.c| 718 ++--
 include/xen/interface/io/blkif.h|  48 +++
 5 files changed, 971 insertions(+), 618 deletions(-)

-- 
1.8.3.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH v4 04/10] xen/blkfront: split per device io_lock

2015-11-01 Thread Bob Liu
The per-device io_lock became a coarse-grained lock after multi-queues/rings
were introduced; this patch introduces a fine-grained ring_lock for each ring.

The old io_lock is renamed to dev_lock and only protects the ->grants list,
which is shared by all rings.

Signed-off-by: Bob Liu 
---
 drivers/block/xen-blkfront.c | 57 ++--
 1 file changed, 34 insertions(+), 23 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index eab78e7..8cc5995 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -121,6 +121,7 @@ MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the
  */
 struct blkfront_ring_info {
struct blkif_front_ring ring;
+   spinlock_t ring_lock;
unsigned int ring_ref[XENBUS_MAX_RING_PAGES];
unsigned int evtchn, irq;
struct work_struct work;
@@ -138,7 +139,8 @@ struct blkfront_ring_info {
  */
 struct blkfront_info
 {
-   spinlock_t io_lock;
+	/* Lock to protect info->grants list shared by multi rings */
+   spinlock_t dev_lock;
struct mutex mutex;
struct xenbus_device *xbdev;
struct gendisk *gd;
@@ -224,6 +226,7 @@ static int fill_grant_buffer(struct blkfront_ring_info *rinfo, int num)
struct grant *gnt_list_entry, *n;
int i = 0;
 
+	spin_lock_irq(&info->dev_lock);
while(i < num) {
gnt_list_entry = kzalloc(sizeof(struct grant), GFP_NOIO);
if (!gnt_list_entry)
@@ -242,6 +245,7 @@ static int fill_grant_buffer(struct blkfront_ring_info *rinfo, int num)
 		list_add(&gnt_list_entry->node, &info->grants);
i++;
}
+	spin_unlock_irq(&info->dev_lock);
 
return 0;
 
@@ -254,6 +258,7 @@ out_of_memory:
kfree(gnt_list_entry);
i--;
}
+	spin_unlock_irq(&info->dev_lock);
BUG_ON(i != 0);
return -ENOMEM;
 }
@@ -265,6 +270,7 @@ static struct grant *get_grant(grant_ref_t *gref_head,
struct grant *gnt_list_entry;
unsigned long buffer_gfn;
 
+	spin_lock(&info->dev_lock);
 	BUG_ON(list_empty(&info->grants));
 	gnt_list_entry = list_first_entry(&info->grants, struct grant,
  node);
@@ -272,8 +278,10 @@ static struct grant *get_grant(grant_ref_t *gref_head,
 
if (gnt_list_entry->gref != GRANT_INVALID_REF) {
info->persistent_gnts_c--;
+		spin_unlock(&info->dev_lock);
return gnt_list_entry;
}
+	spin_unlock(&info->dev_lock);
 
/* Assign a gref to this page */
gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head);
@@ -639,7 +647,7 @@ static int blkif_queue_rq(struct blk_mq_hw_ctx *hctx,
 	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)hctx->driver_data;
 
blk_mq_start_request(qd->rq);
-	spin_lock_irq(&info->io_lock);
+	spin_lock_irq(&rinfo->ring_lock);
 	if (RING_FULL(&rinfo->ring))
goto out_busy;
 
@@ -650,15 +658,15 @@ static int blkif_queue_rq(struct blk_mq_hw_ctx *hctx,
goto out_busy;
 
flush_requests(rinfo);
-	spin_unlock_irq(&info->io_lock);
+	spin_unlock_irq(&rinfo->ring_lock);
return BLK_MQ_RQ_QUEUE_OK;
 
 out_err:
-	spin_unlock_irq(&info->io_lock);
+	spin_unlock_irq(&rinfo->ring_lock);
return BLK_MQ_RQ_QUEUE_ERROR;
 
 out_busy:
-	spin_unlock_irq(&info->io_lock);
+	spin_unlock_irq(&rinfo->ring_lock);
blk_mq_stop_hw_queue(hctx);
return BLK_MQ_RQ_QUEUE_BUSY;
 }
@@ -959,21 +967,22 @@ static void xlvbd_release_gendisk(struct blkfront_info *info)
info->gd = NULL;
 }
 
-/* Must be called with io_lock holded */
 static void kick_pending_request_queues(struct blkfront_ring_info *rinfo)
 {
+   unsigned long flags;
+
+	spin_lock_irqsave(&rinfo->ring_lock, flags);
 	if (!RING_FULL(&rinfo->ring))
 		blk_mq_start_stopped_hw_queues(rinfo->dev_info->rq, true);
+	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
 }
 
 static void blkif_restart_queue(struct work_struct *work)
 {
 	struct blkfront_ring_info *rinfo = container_of(work, struct blkfront_ring_info, work);
 
-	spin_lock_irq(&rinfo->dev_info->io_lock);
if (rinfo->dev_info->connected == BLKIF_STATE_CONNECTED)
kick_pending_request_queues(rinfo);
-	spin_unlock_irq(&rinfo->dev_info->io_lock);
 }
 
 static void blkif_free_ring(struct blkfront_ring_info *rinfo)
@@ -1065,7 +1074,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
int i;
 
/* Prevent new requests being issued until we fix things up. */
-	spin_lock_irq(&info->io_lock);
+	spin_lock_irq(&info->dev_lock);
info->connected = suspend ?
BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
/* No more blkif_request(). */
@@ -1091,7 +1100,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 
for (i 

[Xen-devel] [PATCH v4 01/10] xen/blkif: document blkif multi-queue/ring extension

2015-11-01 Thread Bob Liu
Document the multi-queue/ring feature in terms of XenStore keys to be written by
the backend and by the frontend.

Signed-off-by: Bob Liu 
--
v2:
Add descriptions together with multi-page ring buffer.
---
 include/xen/interface/io/blkif.h | 48 
 1 file changed, 48 insertions(+)

diff --git a/include/xen/interface/io/blkif.h b/include/xen/interface/io/blkif.h
index c33e1c4..8b8cfad 100644
--- a/include/xen/interface/io/blkif.h
+++ b/include/xen/interface/io/blkif.h
@@ -28,6 +28,54 @@ typedef uint16_t blkif_vdev_t;
 typedef uint64_t blkif_sector_t;
 
 /*
+ * Multiple hardware queues/rings:
+ * If supported, the backend will write the key "multi-queue-max-queues" to
+ * the directory for that vbd, and set its value to the maximum supported
+ * number of queues.
+ * Frontends that are aware of this feature and wish to use it can write the
+ * key "multi-queue-num-queues" with the number they wish to use, which must be
+ * greater than zero, and no more than the value reported by the backend in
+ * "multi-queue-max-queues".
+ *
+ * For frontends requesting just one queue, the usual event-channel and
+ * ring-ref keys are written as before, simplifying the backend processing
+ * to avoid distinguishing between a frontend that doesn't understand the
+ * multi-queue feature, and one that does, but requested only one queue.
+ *
+ * Frontends requesting two or more queues must not write the toplevel
+ * event-channel and ring-ref keys, instead writing those keys under sub-keys
+ * having the name "queue-N" where N is the integer ID of the queue/ring for
+ * which those keys belong. Queues are indexed from zero.
+ * For example, a frontend with two queues must write the following set of
+ * queue-related keys:
+ *
+ * /local/domain/1/device/vbd/0/multi-queue-num-queues = "2"
+ * /local/domain/1/device/vbd/0/queue-0 = ""
+ * /local/domain/1/device/vbd/0/queue-0/ring-ref = ""
+ * /local/domain/1/device/vbd/0/queue-0/event-channel = ""
+ * /local/domain/1/device/vbd/0/queue-1 = ""
+ * /local/domain/1/device/vbd/0/queue-1/ring-ref = ""
+ * /local/domain/1/device/vbd/0/queue-1/event-channel = ""
+ *
+ * It is also possible to use multiple queues/rings together with the
+ * multi-page ring buffer feature.
+ * For example, a frontend requesting two queues/rings, where each ring
+ * buffer is two pages, must write the following set of related keys:
+ *
+ * /local/domain/1/device/vbd/0/multi-queue-num-queues = "2"
+ * /local/domain/1/device/vbd/0/ring-page-order = "1"
+ * /local/domain/1/device/vbd/0/queue-0 = ""
+ * /local/domain/1/device/vbd/0/queue-0/ring-ref0 = ""
+ * /local/domain/1/device/vbd/0/queue-0/ring-ref1 = ""
+ * /local/domain/1/device/vbd/0/queue-0/event-channel = ""
+ * /local/domain/1/device/vbd/0/queue-1 = ""
+ * /local/domain/1/device/vbd/0/queue-1/ring-ref0 = ""
+ * /local/domain/1/device/vbd/0/queue-1/ring-ref1 = ""
+ * /local/domain/1/device/vbd/0/queue-1/event-channel = ""
+ *
+ */
+
+/*
  * REQUEST CODES.
  */
 #define BLKIF_OP_READ  0
-- 
1.8.3.1




Re: [Xen-devel] [PATCH v4 02/10] xen/blkfront: separate per ring information out of device info

2015-11-01 Thread kbuild test robot
Hi Bob,

[auto build test ERROR on v4.3-rc7 -- if it's inappropriate base, please 
suggest rules for selecting the more suitable base]

url:
https://github.com/0day-ci/linux/commits/Bob-Liu/xen-block-multi-hardware-queues-rings-support/20151102-122806
config: x86_64-allyesconfig (attached as .config)
reproduce:
# save the attached .config to linux build tree
make ARCH=x86_64 

Note: the 
linux-review/Bob-Liu/xen-block-multi-hardware-queues-rings-support/20151102-122806
 HEAD b29fe44b095649f8faddc4474daba13199c1f5e0 builds fine.
  It only hurts bisectibility.

All errors (new ones prefixed by >>):

   drivers/block/xen-blkfront.c: In function 'blkif_queue_rq':
>> drivers/block/xen-blkfront.c:639:17: error: 'info' undeclared (first use in this function)
     spin_lock_irq(&info->io_lock);
                    ^
   drivers/block/xen-blkfront.c:639:17: note: each undeclared identifier is reported only once for each function it appears in

vim +/info +639 drivers/block/xen-blkfront.c

907c3eb18 Bob Liu 2015-07-13  633  static int blkif_queue_rq(struct blk_mq_hw_ctx *hctx,
907c3eb18 Bob Liu 2015-07-13  634  			  const struct blk_mq_queue_data *qd)
9f27ee595 Jeremy Fitzhardinge 2007-07-17  635  {
2a8974fd4 Bob Liu 2015-11-02  636  	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)hctx->driver_data;
9f27ee595 Jeremy Fitzhardinge 2007-07-17  637  
907c3eb18 Bob Liu 2015-07-13  638  	blk_mq_start_request(qd->rq);
907c3eb18 Bob Liu 2015-07-13 @639  	spin_lock_irq(&info->io_lock);
2a8974fd4 Bob Liu 2015-11-02  640  	if (RING_FULL(&rinfo->ring))
907c3eb18 Bob Liu 2015-07-13  641  		goto out_busy;
9f27ee595 Jeremy Fitzhardinge 2007-07-17  642  

:: The code at line 639 was first introduced by commit
:: 907c3eb18e0bd86ca12a9de80befe8e3647bac3e xen-blkfront: convert to blk-mq APIs

:: TO: Bob Liu 
:: CC: David Vrabel 

---
0-DAY kernel test infrastructureOpen Source Technology Center
https://lists.01.org/pipermail/kbuild-all   Intel Corporation




[Xen-devel] [ovmf test] 63396: all pass - PUSHED

2015-11-01 Thread osstest service owner
flight 63396 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63396/

Perfect :-)
All tests in this flight passed
version targeted for testing:
 ovmf df60fb4cc2ca896fcea9e37b06c276d569f1a6b8
baseline version:
 ovmf 843f8ca01bc195cd077f13512fe285e8db9a3984

Last test of basis63371  2015-10-30 02:04:22 Z2 days
Testing same since63396  2015-10-31 09:49:58 Z1 days1 attempts


People who touched revisions under test:
  Laszlo Ersek 
  Michael Kinney 
  Nagaraj Hegde 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=ovmf
+ revision=df60fb4cc2ca896fcea9e37b06c276d569f1a6b8
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push ovmf 
df60fb4cc2ca896fcea9e37b06c276d569f1a6b8
+ branch=ovmf
+ revision=df60fb4cc2ca896fcea9e37b06c276d569f1a6b8
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=ovmf
+ xenbranch=xen-unstable
+ '[' xovmf = xlinux ']'
+ linuxbranch=
+ '[' x = x ']'
+ qemuubranch=qemu-upstream-unstable
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable
+ prevxenbranch=xen-4.6-testing
+ '[' xdf60fb4cc2ca896fcea9e37b06c276d569f1a6b8 = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/rumpuser-xen.git
++ : git
++ : git://xenbits.xen.org/rumpuser-xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/rumpuser-xen.git
+++ besteffort_repo https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ local repo=https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ cached_repo https://github.com/rumpkernel/rumpkernel-netbsd-src 
'[fetch=try]'
+++ local repo=https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ local 'options=[fetch=try]'
 getconfig GitCacheProxy
 perl -e '
use Osstest;
readglobalconfig();
print $c{"GitCacheProxy"} or die $!;
'
+++ local cache=git://cache:9419/
+++ '[' xgit://cache:9419/ '!=' x ']'
+++ echo 

[Xen-devel] [linux-3.14 test] 63395: regressions - FAIL

2015-11-01 Thread osstest service owner
flight 63395 linux-3.14 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63395/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-pvops 5 kernel-build  fail REGR. vs. 62648

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 13 guest-localmigrate 
fail in 63368 pass in 63395
 test-amd64-amd64-rumpuserxen-amd64 15 
rumpuserxen-demo-xenstorels/xenstorels.repeat fail pass in 63368

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 16 
guest-localmigrate/x10 fail blocked in 62648
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 62648
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 62648
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail like 62648

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-rtds  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-xsm   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 9 debian-hvm-install fail 
never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail never pass

version targeted for testing:
 linux07bd6f89f7ff56495c31505985af690c976374d6
baseline version:
 linux1230ae0e99e05ced8a945a1a2c5762ce5c6c97c9

Last test of basis62648  2015-10-03 22:43:24 Z   28 days
Failing since 63225  2015-10-22 22:20:24 Z   10 days8 attempts
Testing same since63336  2015-10-27 17:53:49 Z5 days4 attempts


People who touched revisions under test:
  "Eric W. Biederman" 
  Aaron Conole 
  Adam Radford 
  Adrian Hunter 
  Al Viro 
  Alex Deucher 
  Alexander Couzens 
  Alexey Klimov 
  Andreas Schwab 
  Andrew Morton 
  Andrey Vagin 
  Andy Lutomirski 
  Andy Shevchenko 
  Antoine Tenart 
  Antoine Ténart 
  Ard Biesheuvel 
  Arnaldo Carvalho de Melo 
  Ben Dooks 
  Ben Hutchings 
  Ben Skeggs 
  Brian Norris 
  Charles Keepax 
  Chris Mason 
  Christoph Biedl 
  Christoph Hellwig 
  Christoph Lameter 
  cov...@ccs.covici.com 
  Daniel Vetter 
  Daniel Vetter 
  Dann Frazier 
  Dave Airlie 
  Dave Kleikamp 
  David S. Miller 
  David Vrabel 
  David Woodhouse 
  David Woodhouse 
  Dirk Mueller 
  Dirk Müller 
  Eric Dumazet 
  Eric W. Biederman 
  Eryu Guan 

[Xen-devel] [libvirt test] 63397: regressions - FAIL

2015-11-01 Thread osstest service owner
flight 63397 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63397/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt   5 libvirt-build fail REGR. vs. 63340

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass

version targeted for testing:
 libvirt  ac339206bfe98e78925b183cba058d0e2e7f03e3
baseline version:
 libvirt  3c7590e0a435d833895fc7b5be489e53e223ad95

Last test of basis    63340  2015-10-28 04:19:47 Z    4 days
Failing since         63352  2015-10-29 04:20:29 Z    3 days    3 attempts
Testing same since    63373  2015-10-30 04:21:45 Z    2 days    2 attempts


People who touched revisions under test:
  Laine Stump 
  Luyao Huang 
  Maxim Perevedentsev 
  Michal Privoznik 
  Roman Bogorodskiy 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  fail
 build-i386-libvirt   pass
 build-amd64-pvops  pass
 build-armhf-pvops  pass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-libvirt-xsm pass
 test-armhf-armhf-libvirt-xsm blocked 
 test-amd64-i386-libvirt-xsm  pass
 test-amd64-amd64-libvirt pass
 test-armhf-armhf-libvirt blocked 
 test-amd64-i386-libvirt  pass
 test-amd64-amd64-libvirt-pairpass
 test-amd64-i386-libvirt-pair pass
 test-armhf-armhf-libvirt-qcow2   blocked 
 test-armhf-armhf-libvirt-raw blocked 
 test-amd64-amd64-libvirt-vhd pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.


commit ac339206bfe98e78925b183cba058d0e2e7f03e3
Author: Laine Stump 
Date:   Thu Oct 29 14:09:59 2015 -0400

util: set max wait for IPv6 DAD to 20 seconds

This was originally set to 5 seconds, but times of 5.5 to 7 seconds
were experienced. Since it's an arbitrary number intended to prevent
an infinite hang, having it a bit too high won't hurt anything, and 20
seconds looks to be adequate (i.e. I think/hope we don't need to make
it tunable in libvirtd.conf)
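
    The reasoning above (an arbitrary hard cap on an otherwise open-ended
    wait) can be sketched as a bounded polling loop. This is only an
    illustrative sketch; `wait_for` and `ready_after_three` are hypothetical
    names, not libvirt API:

```c
#include <assert.h>

/* Hypothetical sketch of a capped wait: poll a condition up to a fixed
 * number of tries so that a stuck condition (here, IPv6 DAD never
 * finishing) cannot hang the caller forever. */
static int calls;
static int ready_after_three(void) { return ++calls >= 3; }

static int wait_for(int (*ready)(void), int max_tries)
{
    for (int i = 0; i < max_tries; i++)
        if (ready())
            return 1;   /* condition met within the cap */
    return 0;           /* gave up: the cap prevents an infinite hang */
}
```

    Raising the cap (20 s instead of 5 s) only delays the failure path,
    never the success path, which is why a somewhat-too-high value is
    harmless.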

commit d41a64a1948c88ccec5b4cff34fd04d3aae7a71e
Author: Luyao Huang 
Date:   Thu Oct 29 17:47:33 2015 +0800

util: set error if 

[Xen-devel] [xen-unstable test] 63400: tolerable FAIL - PUSHED

2015-11-01 Thread osstest service owner
flight 63400 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63400/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds 11 guest-startfail in 63375 pass in 63400
 test-armhf-armhf-xl-xsm 16 guest-start/debian.repeat fail in 63375 pass in 63400
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail pass in 63375

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-rumpuserxen-amd64 15 rumpuserxen-demo-xenstorels/xenstorels.repeat fail REGR. vs. 63356
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 9 debian-hvm-install fail like 63356
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail like 63356
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 63356

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-armhf-armhf-libvirt-raw  9 debian-di-installfail   never pass
 test-armhf-armhf-xl-vhd   9 debian-di-installfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeatfail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-libvirt-qcow2  9 debian-di-installfail never pass
 test-armhf-armhf-libvirt 14 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestorefail   never pass

version targeted for testing:
 xen  e294a0c3af9f4443dc692b180fb1771b1cb075e8
baseline version:
 xen  b261366f10eb150458d28aa728d399d0a781997e

Last test of basis    63356  2015-10-29 10:19:34 Z    3 days
Testing same since    63375  2015-10-30 07:58:57 Z    2 days    2 attempts


People who touched revisions under test:
  Andrew Cooper 
  Dario Faggioli 
  Ian Campbell 
  Ian Jackson 
  Jan Beulich 
  Julien Grall 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-oldkern  pass
 build-i386-oldkern   

Re: [Xen-devel] [OSSTEST PATCH v14 PART 2 10-26/26] Nested HVM testing

2015-11-01 Thread Hu, Robert
> -Original Message-
> From: Hu, Robert
> Sent: Monday, November 2, 2015 11:44 AM
> To: 'Ian Jackson' ;
> xen-de...@lists.xenproject.org
> Cc: Ian Campbell 
> Subject: RE: [OSSTEST PATCH v14 PART 2 10-26/26] Nested HVM testing
> 
> > -Original Message-
> > From: Ian Jackson [mailto:ian.jack...@eu.citrix.com]
> > Sent: Saturday, September 26, 2015 3:15 AM
> > To: xen-de...@lists.xenproject.org
> > Cc: Hu, Robert ; Ian Campbell
> > ; Ian Jackson 
> > Subject: [OSSTEST PATCH v14 PART 2 10-26/26] Nested HVM testing
> >
> > This is the second part of v14 Robert Ho's osstest patch series to
> > support nested HVM tests.
> >
> > It is also available here:
> >   git://xenbits.xen.org/people/iwj/xen.git
> >   http://xenbits.xen.org/git-http/people/iwj/xen.git
> > in wip.nested-hvm.v14.part1..wip.nested-hvm.v14
> >
> > Compared to Robert's v13, which was passed to me by private email,
> >  * I have rebased onto current osstest pretest;
> >  * I have changed how selecthost() is told it's dealing with
> >a nested host (in practice, L1 guest);
> >  * There are a large number of minor cleanups;
> >  * There are some new preparatory cleanup and admin patches;
> >  * I have rewritten almost all of the commit messages.
> >
> > However, I have done only VERY LIMITED testing.  Much of the code here
> > is UNTESTED since my changes.  My testing was confined to:
> >  * Verifying that my changes to cs-adjust-flight worked
> >  * Checking that ad-hoc runs of ts-host-reboot and ts-host-powercycle
> >seemed to work when a guest was specified on the command line.
> >
> > Robert, you kindly volunteered to test a revised version of this
> > series.  I would appreciate if you would check that all of this still
> > works as you expect.  I expect there will be some bugs, perhaps even
> > very silly bugs, introduced by me.
> >
> > I noticed that this series lacks guest serial debug keys and log
> > collection for the L1 guest, because there is no
> > Osstest/Serial/guest.pm.  I would appreciate it if you would provide
> > one.  I don't think it needs to actually collect any logs, because the
> > L1 serial output log will be collected as part of the L0 log
> > collection.  But it ought to support sending debug keys to the L1
> > guest.  When you have provided it you can (in the same patch) fix the
> > corresponding `todo' in selecthost, changing `noop' to `guest'.
> [Hu, Robert]
> 
> Hi Ian,
> Are you sure you would like me to add this part? I took a glance at the
> module code (noop, xenuse, etc.) and didn't quite understand it.
> I can imitate them for Serial::guest.pm, but I'm afraid it will not be
> that good.
> 
[Hu, Robert] 

I don't quite understand this "\x18\x18\x18" part. What is it for? What is
the meaning of $conswitch? I think it is not needed in the guest case.

sub serial_fetch_logs ($) {
    my ($ho) = @_;

    logm("serial: requesting debug information from $ho->{Name}");

    foreach my $mo (@{ $ho->{SerialMethobjs} }) {
        $mo->request_debug("\x18\x18\x18",
                           "0HMQacdegimnrstuvz",
                           "q") or next;
    ...




> >
> >
> > Workflow:
> >
> > Robert: I'm handing this (what I have called `part 2') over to you
> > now.
> >
> > When you make changes, feel free to either rebase, or to make fixup
> > commits (perhaps in `git-rebase -i --autosquash' format) on top.  If
> > you do the latter then you'll probably want to pass that to me as a
> > git branch (via git push to xenbits or emailing me a git bundle),
> > since `squash!' and `fixup!' commits don't look good in email :-).
> >
> > If you rebase, please put changes
> >v15: 
> > in the commit messages, as I have done myself in v14.  Leave my v14
> > notes in place.
> [Hu, Robert]
> 
> Now I've completed this part of the work. Shall I hand the v15 bundle
> over to you, with the above unresolved?
> Current changes based on your patch:
> * Some fixes (already confirmed by you) squashed into the original
> patches, with v15 annotations.
> * 2 fixes (not yet confirmed by you) are separated out as !fixup patches
> for your review; actually only 1 is an explicit fixup patch, as the other
> was squashed in by mistake, but I annotated it clearly.
> * 2 more patches added, you've already been aware of:
> Osstest/Testsupport.pm: change target's default kernkind to 'pvops'
> Osstest/Testsupport.pm: use get_target_property() for some host setup
> 
> 
> >
> > Of course if you have any comments or queries about how I have done
> > things, they would be very welcome.
> >
> > Please do not rebase any of the commits in wip.nested-hvm.v14.part1.
> > If you discover bugs in `part 1' please let us know as I have fed that
> > into the osstest self-test mill with the expectation that it will go
> > into production.
> >
> > I do not expect you to test the changes to cs-adjust-flight.  I have
> > done that.  Indeed they 

[Xen-devel] [PATCH 3/6] xen: factor out allocation of page tables into separate function

2015-11-01 Thread Juergen Gross
Do the allocation of page tables in a separate function. This will
allow doing the allocation at different times of the boot preparations,
depending on the features the kernel supports.

Signed-off-by: Juergen Gross 
---
 grub-core/loader/i386/xen.c | 82 -
 1 file changed, 51 insertions(+), 31 deletions(-)

diff --git a/grub-core/loader/i386/xen.c b/grub-core/loader/i386/xen.c
index e48cc3f..65cec27 100644
--- a/grub-core/loader/i386/xen.c
+++ b/grub-core/loader/i386/xen.c
@@ -56,6 +56,9 @@ static struct grub_relocator_xen_state state;
 static grub_xen_mfn_t *virt_mfn_list;
 static struct start_info *virt_start_info;
 static grub_xen_mfn_t console_pfn;
+static grub_uint64_t *virt_pgtable;
+static grub_uint64_t pgtbl_start;
+static grub_uint64_t pgtbl_end;
 
 #define PAGE_SIZE 4096
 #define MAX_MODULES (PAGE_SIZE / sizeof (struct xen_multiboot_mod_list))
@@ -106,17 +109,17 @@ get_pgtable_size (grub_uint64_t total_pages, 
grub_uint64_t virt_base)
 
 static void
 generate_page_table (grub_uint64_t *where, grub_uint64_t paging_start,
-grub_uint64_t total_pages, grub_uint64_t virt_base,
-grub_xen_mfn_t *mfn_list)
+grub_uint64_t paging_end, grub_uint64_t total_pages,
+grub_uint64_t virt_base, grub_xen_mfn_t *mfn_list)
 {
   if (!virt_base)
-total_pages++;
+paging_end++;
 
   grub_uint64_t lx[NUMBER_OF_LEVELS], lxs[NUMBER_OF_LEVELS];
   grub_uint64_t nlx, nls, sz = 0;
   int l;
 
-  nlx = total_pages;
+  nlx = paging_end;
   nls = virt_base >> PAGE_SHIFT;
   for (l = 0; l < NUMBER_OF_LEVELS; l++)
 {
@@ -160,7 +163,7 @@ generate_page_table (grub_uint64_t *where, grub_uint64_t 
paging_start,
   if (pr)
 pg += POINTERS_PER_PAGE;
 
-  for (j = 0; j < total_pages; j++)
+  for (j = 0; j < paging_end; j++)
 {
   if (j >= paging_start && j < lp)
pg[j + lxs[0]] = page2offset (mfn_list[j]) | 5;
@@ -261,24 +264,12 @@ grub_xen_special_alloc (void)
 }
 
 static grub_err_t
-grub_xen_boot (void)
+grub_xen_pt_alloc (void)
 {
   grub_relocator_chunk_t ch;
   grub_err_t err;
   grub_uint64_t nr_info_pages;
   grub_uint64_t nr_pages, nr_pt_pages, nr_need_pages;
-  struct gnttab_set_version gnttab_setver;
-  grub_size_t i;
-
-  if (grub_xen_n_allocated_shared_pages)
-return grub_error (GRUB_ERR_BUG, "active grants");
-
-  err = grub_xen_p2m_alloc ();
-  if (err)
-return err;
-  err = grub_xen_special_alloc ();
-  if (err)
-return err;
 
   next_start.pt_base = max_addr + xen_inf.virt_base;
   state.paging_start = max_addr >> PAGE_SHIFT;
@@ -298,30 +289,59 @@ grub_xen_boot (void)
   nr_pages = nr_need_pages;
 }
 
-  grub_dprintf ("xen", "bootstrap domain %llx+%llx\n",
-   (unsigned long long) xen_inf.virt_base,
-   (unsigned long long) page2offset (nr_pages));
-
   err = grub_relocator_alloc_chunk_addr (relocator, ,
 max_addr, page2offset (nr_pt_pages));
   if (err)
 return err;
 
+  virt_pgtable = get_virtual_current_address (ch);
+  pgtbl_start = max_addr >> PAGE_SHIFT;
+  max_addr += page2offset (nr_pt_pages);
+  state.stack = max_addr + STACK_SIZE + xen_inf.virt_base;
+  state.paging_size = nr_pt_pages;
+  next_start.nr_pt_frames = nr_pt_pages;
+  max_addr = page2offset (nr_pages);
+  pgtbl_end = nr_pages;
+
+  return GRUB_ERR_NONE;
+}
+
+static grub_err_t
+grub_xen_boot (void)
+{
+  grub_err_t err;
+  grub_uint64_t nr_pages;
+  struct gnttab_set_version gnttab_setver;
+  grub_size_t i;
+
+  if (grub_xen_n_allocated_shared_pages)
+return grub_error (GRUB_ERR_BUG, "active grants");
+
+  err = grub_xen_p2m_alloc ();
+  if (err)
+return err;
+  err = grub_xen_special_alloc ();
+  if (err)
+return err;
+  err = grub_xen_pt_alloc ();
+  if (err)
+return err;
+
   err = set_mfns (console_pfn);
   if (err)
 return err;
 
-  generate_page_table (get_virtual_current_address (ch),
-  max_addr >> PAGE_SHIFT, nr_pages,
+  nr_pages = max_addr >> PAGE_SHIFT;
+
+  grub_dprintf ("xen", "bootstrap domain %llx+%llx\n",
+   (unsigned long long) xen_inf.virt_base,
+   (unsigned long long) page2offset (nr_pages));
+
+  generate_page_table (virt_pgtable, pgtbl_start, pgtbl_end, nr_pages,
   xen_inf.virt_base, virt_mfn_list);
 
-  max_addr += page2offset (nr_pt_pages);
-  state.stack = max_addr + STACK_SIZE + xen_inf.virt_base;
   state.entry_point = xen_inf.entry_point;
 
-  next_start.nr_pt_frames = nr_pt_pages;
-  state.paging_size = nr_pt_pages;
-
   *virt_start_info = next_start;
 
   grub_memset (_setver, 0, sizeof (gnttab_setver));
@@ -335,8 +355,8 @@ grub_xen_boot (void)
   return grub_relocator_xen_boot (relocator, state, nr_pages,
  xen_inf.virt_base <
  PAGE_SIZE ? page2offset (nr_pages) : 0,
- nr_pages 

[Xen-devel] [PATCH 5/6] xen: modify page table construction

2015-11-01 Thread Juergen Gross
Modify the page table construction to allow multiple virtual regions
to be mapped. This is done as preparation for removing the p2m list
from the initial kernel mapping in order to support huge pv domains.

This allows a cleaner approach for mapping the relocator page by
using this capability.

The interface to the assembler level of the relocator has to be changed
so that it can process multiple page table areas.

Signed-off-by: Juergen Gross 
---
 grub-core/lib/i386/xen/relocator.S   |  47 +++---
 grub-core/lib/x86_64/xen/relocator.S |  41 +++--
 grub-core/lib/xen/relocator.c|  22 ++-
 grub-core/loader/i386/xen.c  | 313 +++
 include/grub/xen/relocator.h |   6 +-
 5 files changed, 276 insertions(+), 153 deletions(-)
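
The new relocator interface replaces the single (paging_start,
paging_size) pair with an array of (start, size) pairs terminated by a
zero size, as the grub_relocator_xen_paging_areas additions below show. A
minimal C sketch of consuming such an array (hypothetical names; the real
consumer is the assembler loop in the patch, which issues
update_va_mapping hypercalls per page instead of just counting):

```c
#include <assert.h>
#include <stdint.h>

/* One region of page-table frames: start pfn plus length in pages. */
struct paging_area
{
    uint64_t start;
    uint64_t size;  /* size == 0 terminates the array */
};

/* Walk the area list the way the relocator loop does, here merely
 * summing the page counts of all regions until the end marker. */
static uint64_t total_pt_pages(const struct paging_area *a)
{
    uint64_t sum = 0;
    for (; a->size != 0; a++)
        sum += a->size;
    return sum;
}

/* Example input: two regions plus the zero-size end marker. */
static const struct paging_area example_areas[] =
    { {0, 3}, {16, 2}, {0, 0} };
```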

diff --git a/grub-core/lib/i386/xen/relocator.S 
b/grub-core/lib/i386/xen/relocator.S
index 694a54c..c23b405 100644
--- a/grub-core/lib/i386/xen/relocator.S
+++ b/grub-core/lib/i386/xen/relocator.S
@@ -50,41 +50,45 @@ VARIABLE(grub_relocator_xen_remapper_map_high)
jmp *%ebx
 
 LOCAL(cont):
-   xorl%eax, %eax
-   movl%eax, %ebp
+   /* mov imm32, %eax */
+   .byte   0xb8
+VARIABLE(grub_relocator_xen_paging_areas_addr)
+   .long   0
+   movl%eax, %ebx
 1:
-
+   movl0(%ebx), %ebp
+   movl4(%ebx), %ecx
+   testl   %ecx, %ecx
+   jz  3f
+   addl$8, %ebx
+   movl%ebx, %esp
+
+2:
+   movl%ecx, %edi
/* mov imm32, %eax */
.byte   0xb8
 VARIABLE(grub_relocator_xen_mfn_list)
.long   0
-   movl%eax, %edi
-   movl%ebp, %eax
-   movl0(%edi, %eax, 4), %ecx
-
-   /* mov imm32, %ebx */
-   .byte   0xbb
-VARIABLE(grub_relocator_xen_paging_start)
-   .long   0
-   shll$12, %eax
-   addl%eax, %ebx
+   movl0(%eax, %ebp, 4), %ecx
+   movl%ebp, %ebx
+   shll$12, %ebx
movl%ecx, %edx
shll$12,  %ecx
shrl$20,  %edx
orl $5, %ecx
movl$2, %esi
movl$__HYPERVISOR_update_va_mapping, %eax
-   int $0x82
+   int $0x82   /* parameters: eax, ebx, ecx, edx, esi */
 
incl%ebp
-   /* mov imm32, %ecx */
-   .byte   0xb9
-VARIABLE(grub_relocator_xen_paging_size)
-   .long   0
-   cmpl%ebp, %ecx
+   movl%edi, %ecx
+
+   loop    2b
 
-   ja  1b
+   mov %esp, %ebx
+   jmp 1b
 
+3:
/* mov imm32, %ebx */
.byte   0xbb
 VARIABLE(grub_relocator_xen_mmu_op_addr)
@@ -102,6 +106,9 @@ VARIABLE(grub_relocator_xen_remap_continue)
 
jmp *%eax
 
+VARIABLE(grub_relocator_xen_paging_areas)
+   .long   0, 0, 0, 0, 0, 0, 0, 0
+
 VARIABLE(grub_relocator_xen_mmu_op)
.space 256
 
diff --git a/grub-core/lib/x86_64/xen/relocator.S 
b/grub-core/lib/x86_64/xen/relocator.S
index 78c1233..dbb90c7 100644
--- a/grub-core/lib/x86_64/xen/relocator.S
+++ b/grub-core/lib/x86_64/xen/relocator.S
@@ -50,31 +50,24 @@ VARIABLE(grub_relocator_xen_remapper_map)
 
 LOCAL(cont):

-   /* mov imm64, %rcx */
-   .byte   0x48
-   .byte   0xb9
-VARIABLE(grub_relocator_xen_paging_size)
-   .quad   0
-
-   /* mov imm64, %rax */
-   .byte   0x48
-   .byte   0xb8
-VARIABLE(grub_relocator_xen_paging_start)
-   .quad   0
-
-   movq%rax, %r12
-
/* mov imm64, %rax */
.byte   0x48
.byte   0xb8
 VARIABLE(grub_relocator_xen_mfn_list)
.quad   0
 
-   movq%rax, %rsi
+   movq%rax, %rbx
+   leaqEXT_C(grub_relocator_xen_paging_areas) (%rip), %r8
+
 1:
+   movq0(%r8), %r12
+   movq8(%r8), %rcx
+   testq   %rcx, %rcx
+   jz  3f
+2:
movq%r12, %rdi
-   movq%rsi, %rbx
-   movq0(%rsi), %rsi
+   shlq$12, %rdi
+   movq(%rbx, %r12, 8), %rsi
shlq$12,  %rsi
orq $5, %rsi
movq$2, %rdx
@@ -83,12 +76,14 @@ VARIABLE(grub_relocator_xen_mfn_list)
syscall
 
movq%r9, %rcx
-   addq$8, %rbx
-   addq$4096, %r12
-   movq%rbx, %rsi
+   incq%r12
+
+   loop 2b
 
-   loop 1b
+   addq$16, %r8
+   jmp 1b
 
+3:
leaq   EXT_C(grub_relocator_xen_mmu_op) (%rip), %rdi
movq   $3, %rsi
movq   $0, %rdx
@@ -104,6 +99,10 @@ VARIABLE(grub_relocator_xen_remap_continue)
 
jmp *%rax
 
+VARIABLE(grub_relocator_xen_paging_areas)
+   /* array of start, size pairs, size 0 is end marker */
+   .quad   0, 0, 0, 0, 0, 0, 0, 0
+
 VARIABLE(grub_relocator_xen_mmu_op)
.space 256
 
diff --git a/grub-core/lib/xen/relocator.c b/grub-core/lib/xen/relocator.c
index 8f427d3..bc29055 100644
--- a/grub-core/lib/xen/relocator.c
+++ b/grub-core/lib/xen/relocator.c
@@ -36,15 +36,15 @@ extern grub_uint8_t grub_relocator_xen_remap_end;
 extern grub_xen_reg_t 

[Xen-devel] [PATCH 0/6] grub-xen: support booting huge pv-domains

2015-11-01 Thread Juergen Gross
The Xen hypervisor supports starting a dom0 with large memory (up to
the TB range) by not including the initrd and p2m list in the initial
kernel mapping. Especially the p2m list can grow larger than the
available virtual space in the initial mapping.

The started kernel indicates its support for each feature via
elf notes.

This series enables grub-xen to do the same as the hypervisor.

Tested with:
- 32 bit domU (kernel not supporting unmapped initrd)
- 32 bit domU (kernel supporting unmapped initrd)
- 1 GB 64 bit domU (kernel supporting unmapped initrd, not p2m)
- 1 GB 64 bit domU (kernel supporting unmapped initrd and p2m)
- 900GB 64 bit domU (kernel supporting unmapped initrd and p2m)


Juergen Gross (6):
  xen: factor out p2m list allocation into separate function
  xen: factor out allocation of special pages into separate function
  xen: factor out allocation of page tables into separate function
  xen: add capability to load initrd outside of initial mapping
  xen: modify page table construction
  xen: add capability to load p2m list outside of kernel mapping

 grub-core/lib/i386/xen/relocator.S   |  47 ++--
 grub-core/lib/x86_64/xen/relocator.S |  41 ++-
 grub-core/lib/xen/relocator.c|  22 +-
 grub-core/loader/i386/xen.c  | 521 +--
 grub-core/loader/i386/xen_fileXX.c   |   7 +
 include/grub/xen/relocator.h |   6 +-
 include/grub/xen_file.h  |   3 +
 7 files changed, 446 insertions(+), 201 deletions(-)

-- 
2.1.4


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH 1/6] xen: factor out p2m list allocation into separate function

2015-11-01 Thread Juergen Gross
Do the p2m list allocation for the kernel to be loaded in a separate
function. This will allow doing the p2m list allocation at different
times of the boot preparations, depending on the features the kernel
supports.

While at it, remove the superfluous setting of first_p2m_pfn and
nr_p2m_frames, as those are needed only when the p2m list is not
mapped by the initial kernel mapping.

Signed-off-by: Juergen Gross 
---
 grub-core/loader/i386/xen.c | 70 ++---
 1 file changed, 40 insertions(+), 30 deletions(-)
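
The p2m list size computed in grub_xen_p2m_alloc() below is simply one
machine-frame number per guest page, rounded up to a page boundary. As a
sketch (`p2m_alloc_size` is a hypothetical helper; it mirrors the
`sizeof (grub_xen_mfn_t) * nr_pages` and ALIGN_UP steps in the diff):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 4096
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((uint64_t) (a) - 1))

/* Hypothetical sketch: bytes reserved for the p2m list, one mfn entry
 * per guest page, padded to whole pages because the allocator advances
 * max_addr in page-aligned steps. */
static uint64_t p2m_alloc_size(uint64_t nr_pages, size_t mfn_size)
{
    return ALIGN_UP(nr_pages * mfn_size, PAGE_SIZE);
}
```

With 8-byte mfn entries, a page of p2m covers 512 guest pages, which is
why the list itself can outgrow the initial mapping for huge domains.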

diff --git a/grub-core/loader/i386/xen.c b/grub-core/loader/i386/xen.c
index c4d9689..42ed7c7 100644
--- a/grub-core/loader/i386/xen.c
+++ b/grub-core/loader/i386/xen.c
@@ -52,6 +52,8 @@ static struct grub_xen_file_info xen_inf;
 static struct xen_multiboot_mod_list *xen_module_info_page;
 static grub_uint64_t modules_target_start;
 static grub_size_t n_modules;
+static struct grub_relocator_xen_state state;
+static grub_xen_mfn_t *virt_mfn_list;
 
 #define PAGE_SIZE 4096
 #define MAX_MODULES (PAGE_SIZE / sizeof (struct xen_multiboot_mod_list))
@@ -166,7 +168,7 @@ generate_page_table (grub_uint64_t *where, grub_uint64_t 
paging_start,
 }
 
 static grub_err_t
-set_mfns (grub_xen_mfn_t * new_mfn_list, grub_xen_mfn_t pfn)
+set_mfns (grub_xen_mfn_t pfn)
 {
   grub_xen_mfn_t i, t;
   grub_xen_mfn_t cn_pfn = -1, st_pfn = -1;
@@ -175,32 +177,32 @@ set_mfns (grub_xen_mfn_t * new_mfn_list, grub_xen_mfn_t 
pfn)
 
   for (i = 0; i < grub_xen_start_page_addr->nr_pages; i++)
 {
-  if (new_mfn_list[i] == grub_xen_start_page_addr->console.domU.mfn)
+  if (virt_mfn_list[i] == grub_xen_start_page_addr->console.domU.mfn)
cn_pfn = i;
-  if (new_mfn_list[i] == grub_xen_start_page_addr->store_mfn)
+  if (virt_mfn_list[i] == grub_xen_start_page_addr->store_mfn)
st_pfn = i;
 }
   if (cn_pfn == (grub_xen_mfn_t)-1)
 return grub_error (GRUB_ERR_BUG, "no console");
   if (st_pfn == (grub_xen_mfn_t)-1)
 return grub_error (GRUB_ERR_BUG, "no store");
-  t = new_mfn_list[pfn];
-  new_mfn_list[pfn] = new_mfn_list[cn_pfn];
-  new_mfn_list[cn_pfn] = t;
-  t = new_mfn_list[pfn + 1];
-  new_mfn_list[pfn + 1] = new_mfn_list[st_pfn];
-  new_mfn_list[st_pfn] = t;
-
-  m2p_updates[0].ptr = page2offset (new_mfn_list[pfn]) | MMU_MACHPHYS_UPDATE;
+  t = virt_mfn_list[pfn];
+  virt_mfn_list[pfn] = virt_mfn_list[cn_pfn];
+  virt_mfn_list[cn_pfn] = t;
+  t = virt_mfn_list[pfn + 1];
+  virt_mfn_list[pfn + 1] = virt_mfn_list[st_pfn];
+  virt_mfn_list[st_pfn] = t;
+
+  m2p_updates[0].ptr = page2offset (virt_mfn_list[pfn]) | MMU_MACHPHYS_UPDATE;
   m2p_updates[0].val = pfn;
   m2p_updates[1].ptr =
-page2offset (new_mfn_list[pfn + 1]) | MMU_MACHPHYS_UPDATE;
+page2offset (virt_mfn_list[pfn + 1]) | MMU_MACHPHYS_UPDATE;
   m2p_updates[1].val = pfn + 1;
   m2p_updates[2].ptr =
-page2offset (new_mfn_list[cn_pfn]) | MMU_MACHPHYS_UPDATE;
+page2offset (virt_mfn_list[cn_pfn]) | MMU_MACHPHYS_UPDATE;
   m2p_updates[2].val = cn_pfn;
   m2p_updates[3].ptr =
-page2offset (new_mfn_list[st_pfn]) | MMU_MACHPHYS_UPDATE;
+page2offset (virt_mfn_list[st_pfn]) | MMU_MACHPHYS_UPDATE;
   m2p_updates[3].val = st_pfn;
 
   grub_xen_mmu_update (m2p_updates, 4, NULL, DOMID_SELF);
@@ -209,34 +211,43 @@ set_mfns (grub_xen_mfn_t * new_mfn_list, grub_xen_mfn_t 
pfn)
 }
 
 static grub_err_t
+grub_xen_p2m_alloc (void)
+{
+  grub_relocator_chunk_t ch;
+  grub_size_t p2msize;
+  grub_err_t err;
+
+  state.mfn_list = max_addr;
+  next_start.mfn_list = max_addr + xen_inf.virt_base;
+  p2msize = sizeof (grub_xen_mfn_t) * grub_xen_start_page_addr->nr_pages;
+  err = grub_relocator_alloc_chunk_addr (relocator, , max_addr, p2msize);
+  if (err)
+return err;
+  virt_mfn_list = get_virtual_current_address (ch);
+  grub_memcpy (virt_mfn_list,
+  (void *) grub_xen_start_page_addr->mfn_list, p2msize);
+  max_addr = ALIGN_UP (max_addr + p2msize, PAGE_SIZE);
+
+  return GRUB_ERR_NONE;
+}
+
+static grub_err_t
 grub_xen_boot (void)
 {
-  struct grub_relocator_xen_state state;
   grub_relocator_chunk_t ch;
   grub_err_t err;
-  grub_size_t pgtsize;
   struct start_info *nst;
   grub_uint64_t nr_info_pages;
   grub_uint64_t nr_pages, nr_pt_pages, nr_need_pages;
   struct gnttab_set_version gnttab_setver;
-  grub_xen_mfn_t *new_mfn_list;
   grub_size_t i;
 
   if (grub_xen_n_allocated_shared_pages)
 return grub_error (GRUB_ERR_BUG, "active grants");
 
-  state.mfn_list = max_addr;
-  next_start.mfn_list = max_addr + xen_inf.virt_base;
-  next_start.first_p2m_pfn = max_addr >> PAGE_SHIFT;   /* Is this right? */
-  pgtsize = sizeof (grub_xen_mfn_t) * grub_xen_start_page_addr->nr_pages;
-  err = grub_relocator_alloc_chunk_addr (relocator, , max_addr, pgtsize);
-  next_start.nr_p2m_frames = (pgtsize + PAGE_SIZE - 1) >> PAGE_SHIFT;
+  err = grub_xen_p2m_alloc ();
   if (err)
 return err;
-  new_mfn_list = get_virtual_current_address (ch);
-  

[Xen-devel] [PATCH 4/6] xen: add capability to load initrd outside of initial mapping

2015-11-01 Thread Juergen Gross
Modern pvops linux kernels support an initrd not covered by the initial
mapping. This capability is flagged by an elf-note.

In case the elf-note is set by the kernel, don't place the initrd into
the initial mapping. This will allow loading larger initrds and/or
supporting domains with larger memory, as the initial mapping is limited
to 2GB and contains the p2m list.

Signed-off-by: Juergen Gross 
---
 grub-core/loader/i386/xen.c| 56 ++
 grub-core/loader/i386/xen_fileXX.c |  3 ++
 include/grub/xen_file.h|  1 +
 3 files changed, 49 insertions(+), 11 deletions(-)
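
The elf-note handling added to parse_note() below reduces the note's
4-byte little-endian descriptor to a boolean flag via
`!!grub_le_to_cpu32(...)`. A standalone sketch of that decode
(`note_flag` is a hypothetical name):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: interpret a 4-byte little-endian elf-note
 * descriptor as a boolean feature flag, as the patch does for the
 * "initrd may live outside the initial mapping" capability. */
static int note_flag(const unsigned char *desc)
{
    uint32_t v = (uint32_t) desc[0]
               | ((uint32_t) desc[1] << 8)
               | ((uint32_t) desc[2] << 16)
               | ((uint32_t) desc[3] << 24);
    return !!v;     /* any non-zero value means the feature is present */
}
```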

diff --git a/grub-core/loader/i386/xen.c b/grub-core/loader/i386/xen.c
index 65cec27..0f41048 100644
--- a/grub-core/loader/i386/xen.c
+++ b/grub-core/loader/i386/xen.c
@@ -307,15 +307,14 @@ grub_xen_pt_alloc (void)
 }
 
 static grub_err_t
-grub_xen_boot (void)
+grub_xen_alloc_end (void)
 {
   grub_err_t err;
-  grub_uint64_t nr_pages;
-  struct gnttab_set_version gnttab_setver;
-  grub_size_t i;
+  static int called = 0;
 
-  if (grub_xen_n_allocated_shared_pages)
-return grub_error (GRUB_ERR_BUG, "active grants");
+  if (called)
+return GRUB_ERR_NONE;
+  called = 1;
 
   err = grub_xen_p2m_alloc ();
   if (err)
@@ -327,6 +326,24 @@ grub_xen_boot (void)
   if (err)
 return err;
 
+  return GRUB_ERR_NONE;
+}
+
+static grub_err_t
+grub_xen_boot (void)
+{
+  grub_err_t err;
+  grub_uint64_t nr_pages;
+  struct gnttab_set_version gnttab_setver;
+  grub_size_t i;
+
+  if (grub_xen_n_allocated_shared_pages)
+return grub_error (GRUB_ERR_BUG, "active grants");
+
+  err = grub_xen_alloc_end ();
+  if (err)
+return err;
+
   err = set_mfns (console_pfn);
   if (err)
 return err;
@@ -587,6 +604,13 @@ grub_cmd_initrd (grub_command_t cmd __attribute__ 
((unused)),
   goto fail;
 }
 
+  if (xen_inf.unmapped_initrd)
+{
+  err = grub_xen_alloc_end ();
+  if (err)
+goto fail;
+}
+
   if (grub_initrd_init (argc, argv, _ctx))
 goto fail;
 
@@ -603,13 +627,22 @@ grub_cmd_initrd (grub_command_t cmd __attribute__ 
((unused)),
goto fail;
 }
 
-  next_start.mod_start = max_addr + xen_inf.virt_base;
-  next_start.mod_len = size;
-
-  max_addr = ALIGN_UP (max_addr + size, PAGE_SIZE);
+  if (xen_inf.unmapped_initrd)
+{
+  next_start.flags |= SIF_MOD_START_PFN;
+  next_start.mod_start = max_addr >> PAGE_SHIFT;
+  next_start.mod_len = size;
+}
+  else
+{
+  next_start.mod_start = max_addr + xen_inf.virt_base;
+  next_start.mod_len = size;
+}
 
   grub_dprintf ("xen", "Initrd, addr=0x%x, size=0x%x\n",
-   (unsigned) next_start.mod_start, (unsigned) size);
+   (unsigned) (max_addr + xen_inf.virt_base), (unsigned) size);
+
+  max_addr = ALIGN_UP (max_addr + size, PAGE_SIZE);
 
 fail:
   grub_initrd_close (_ctx);
@@ -660,6 +693,7 @@ grub_cmd_module (grub_command_t cmd __attribute__ 
((unused)),
 
   if (!xen_module_info_page)
 {
+  xen_inf.unmapped_initrd = 0;
   n_modules = 0;
   max_addr = ALIGN_UP (max_addr, PAGE_SIZE);
   modules_target_start = max_addr;
diff --git a/grub-core/loader/i386/xen_fileXX.c 
b/grub-core/loader/i386/xen_fileXX.c
index 1ba5649..69fccd2 100644
--- a/grub-core/loader/i386/xen_fileXX.c
+++ b/grub-core/loader/i386/xen_fileXX.c
@@ -253,6 +253,9 @@ parse_note (grub_elf_t elf, struct grub_xen_file_info *xi,
  descsz == 2 ? 2 : 3) == 0)
xi->arch = GRUB_XEN_FILE_I386;
  break;
+   case 16:
+ xi->unmapped_initrd = !!grub_le_to_cpu32(*(grub_uint32_t *) desc);
+ break;
default:
  grub_dprintf ("xen", "unknown note type %d\n", nh->n_type);
  break;
diff --git a/include/grub/xen_file.h b/include/grub/xen_file.h
index 4b2ccba..ed749fa 100644
--- a/include/grub/xen_file.h
+++ b/include/grub/xen_file.h
@@ -36,6 +36,7 @@ struct grub_xen_file_info
   int has_note;
   int has_xen_guest;
   int extended_cr3;
+  int unmapped_initrd;
   enum
   {
 GRUB_XEN_FILE_I386 = 1,
-- 
2.1.4


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH 2/6] xen: factor out allocation of special pages into separate function

2015-11-01 Thread Juergen Gross
Do the allocation of special pages (start info, console and xenbus
ring buffers) in a separate function. This allows doing the allocation
at different stages of the boot preparations, depending on the
features the kernel supports.

Signed-off-by: Juergen Gross 
---
 grub-core/loader/i386/xen.c | 50 +
 1 file changed, 32 insertions(+), 18 deletions(-)

diff --git a/grub-core/loader/i386/xen.c b/grub-core/loader/i386/xen.c
index 42ed7c7..e48cc3f 100644
--- a/grub-core/loader/i386/xen.c
+++ b/grub-core/loader/i386/xen.c
@@ -54,6 +54,8 @@ static grub_uint64_t modules_target_start;
 static grub_size_t n_modules;
 static struct grub_relocator_xen_state state;
 static grub_xen_mfn_t *virt_mfn_list;
+static struct start_info *virt_start_info;
+static grub_xen_mfn_t console_pfn;
 
 #define PAGE_SIZE 4096
 #define MAX_MODULES (PAGE_SIZE / sizeof (struct xen_multiboot_mod_list))
@@ -232,43 +234,51 @@ grub_xen_p2m_alloc (void)
 }
 
 static grub_err_t
-grub_xen_boot (void)
+grub_xen_special_alloc (void)
 {
   grub_relocator_chunk_t ch;
   grub_err_t err;
-  struct start_info *nst;
-  grub_uint64_t nr_info_pages;
-  grub_uint64_t nr_pages, nr_pt_pages, nr_need_pages;
-  struct gnttab_set_version gnttab_setver;
-  grub_size_t i;
-
-  if (grub_xen_n_allocated_shared_pages)
-return grub_error (GRUB_ERR_BUG, "active grants");
-
-  err = grub_xen_p2m_alloc ();
-  if (err)
-return err;
 
   err = grub_relocator_alloc_chunk_addr (relocator, &ch,
					 max_addr, sizeof (next_start));
   if (err)
 return err;
   state.start_info = max_addr + xen_inf.virt_base;
-  nst = get_virtual_current_address (ch);
+  virt_start_info = get_virtual_current_address (ch);
   max_addr = ALIGN_UP (max_addr + sizeof (next_start), PAGE_SIZE);
+  console_pfn = max_addr >> PAGE_SHIFT;
+  max_addr += 2 * PAGE_SIZE;
 
   next_start.nr_pages = grub_xen_start_page_addr->nr_pages;
   grub_memcpy (next_start.magic, grub_xen_start_page_addr->magic,
   sizeof (next_start.magic));
+  next_start.shared_info = grub_xen_start_page_addr->shared_info;
   next_start.store_mfn = grub_xen_start_page_addr->store_mfn;
   next_start.store_evtchn = grub_xen_start_page_addr->store_evtchn;
   next_start.console.domU = grub_xen_start_page_addr->console.domU;
-  next_start.shared_info = grub_xen_start_page_addr->shared_info;
 
-  err = set_mfns (max_addr >> PAGE_SHIFT);
+  return GRUB_ERR_NONE;
+}
+
+static grub_err_t
+grub_xen_boot (void)
+{
+  grub_relocator_chunk_t ch;
+  grub_err_t err;
+  grub_uint64_t nr_info_pages;
+  grub_uint64_t nr_pages, nr_pt_pages, nr_need_pages;
+  struct gnttab_set_version gnttab_setver;
+  grub_size_t i;
+
+  if (grub_xen_n_allocated_shared_pages)
+return grub_error (GRUB_ERR_BUG, "active grants");
+
+  err = grub_xen_p2m_alloc ();
+  if (err)
+return err;
+  err = grub_xen_special_alloc ();
   if (err)
 return err;
-  max_addr += 2 * PAGE_SIZE;
 
   next_start.pt_base = max_addr + xen_inf.virt_base;
   state.paging_start = max_addr >> PAGE_SHIFT;
@@ -297,6 +307,10 @@ grub_xen_boot (void)
   if (err)
 return err;
 
+  err = set_mfns (console_pfn);
+  if (err)
+return err;
+
   generate_page_table (get_virtual_current_address (ch),
   max_addr >> PAGE_SHIFT, nr_pages,
   xen_inf.virt_base, virt_mfn_list);
@@ -308,7 +322,7 @@ grub_xen_boot (void)
   next_start.nr_pt_frames = nr_pt_pages;
   state.paging_size = nr_pt_pages;
 
-  *nst = next_start;
+  *virt_start_info = next_start;
 
   grub_memset (&gnttab_setver, 0, sizeof (gnttab_setver));
 
-- 
2.1.4




[Xen-devel] [PATCH 6/6] xen: add capability to load p2m list outside of kernel mapping

2015-11-01 Thread Juergen Gross
Modern pvops Linux kernels support a p2m list that is not covered by
the kernel mapping. This capability is flagged by an elf-note
specifying the virtual address at which the kernel expects the p2m
list to be mapped.

In case the kernel sets this elf-note, don't place the p2m list in the
kernel mapping, but map it at the given address instead. This allows
supporting domains with more memory: the kernel mapping is limited to
2GB, and a domain with huge memory in the TB range will have a p2m
list larger than that.

Signed-off-by: Juergen Gross 
---
 grub-core/loader/i386/xen.c| 50 --
 grub-core/loader/i386/xen_fileXX.c |  4 +++
 include/grub/xen_file.h|  2 ++
 3 files changed, 48 insertions(+), 8 deletions(-)

diff --git a/grub-core/loader/i386/xen.c b/grub-core/loader/i386/xen.c
index 5e10420..9ddc6c2 100644
--- a/grub-core/loader/i386/xen.c
+++ b/grub-core/loader/i386/xen.c
@@ -305,19 +305,44 @@ static grub_err_t
 grub_xen_p2m_alloc (void)
 {
   grub_relocator_chunk_t ch;
-  grub_size_t p2msize;
+  grub_size_t p2msize, p2malloc;
   grub_err_t err;
+  struct grub_xen_mapping *map;
+
+  map = mappings + n_mappings;
+  p2msize = ALIGN_UP (sizeof (grub_xen_mfn_t) *
+ grub_xen_start_page_addr->nr_pages, PAGE_SIZE);
+  if (xen_inf.has_p2m_base)
+{
+  err = get_pgtable_size (xen_inf.p2m_base, xen_inf.p2m_base + p2msize,
+ (max_addr + p2msize) >> PAGE_SHIFT);
+  if (err)
+   return err;
+
+  map->area.pfn_start = max_addr >> PAGE_SHIFT;
+  p2malloc = p2msize + page2offset (map->area.n_pt_pages);
+  n_mappings++;
+  next_start.mfn_list = xen_inf.p2m_base;
+  next_start.first_p2m_pfn = map->area.pfn_start;
+  next_start.nr_p2m_frames = p2malloc >> PAGE_SHIFT;
+}
+  else
+{
+  next_start.mfn_list = max_addr + xen_inf.virt_base;
+  p2malloc = p2msize;
+}
 
   state.mfn_list = max_addr;
-  next_start.mfn_list = max_addr + xen_inf.virt_base;
-  p2msize = sizeof (grub_xen_mfn_t) * grub_xen_start_page_addr->nr_pages;
-  err = grub_relocator_alloc_chunk_addr (relocator, &ch, max_addr, p2msize);
+  err = grub_relocator_alloc_chunk_addr (relocator, &ch, max_addr, p2malloc);
   if (err)
 return err;
   virt_mfn_list = get_virtual_current_address (ch);
+  if (xen_inf.has_p2m_base)
+map->where = (grub_uint64_t *) virt_mfn_list +
+p2msize / sizeof (grub_uint64_t);
   grub_memcpy (virt_mfn_list,
   (void *) grub_xen_start_page_addr->mfn_list, p2msize);
-  max_addr = ALIGN_UP (max_addr + p2msize, PAGE_SIZE);
+  max_addr += p2malloc;
 
   return GRUB_ERR_NONE;
 }
@@ -425,9 +450,12 @@ grub_xen_alloc_end (void)
 return GRUB_ERR_NONE;
   called = 1;
 
-  err = grub_xen_p2m_alloc ();
-  if (err)
-return err;
+  if (!xen_inf.has_p2m_base)
+{
+  err = grub_xen_p2m_alloc ();
+  if (err)
+   return err;
+}
   err = grub_xen_special_alloc ();
   if (err)
 return err;
@@ -452,6 +480,12 @@ grub_xen_boot (void)
   err = grub_xen_alloc_end ();
   if (err)
 return err;
+  if (xen_inf.has_p2m_base)
+{
+  err = grub_xen_p2m_alloc ();
+  if (err)
+   return err;
+}
 
   err = set_mfns (console_pfn);
   if (err)
diff --git a/grub-core/loader/i386/xen_fileXX.c 
b/grub-core/loader/i386/xen_fileXX.c
index 69fccd2..8d01adb 100644
--- a/grub-core/loader/i386/xen_fileXX.c
+++ b/grub-core/loader/i386/xen_fileXX.c
@@ -253,6 +253,10 @@ parse_note (grub_elf_t elf, struct grub_xen_file_info *xi,
  descsz == 2 ? 2 : 3) == 0)
xi->arch = GRUB_XEN_FILE_I386;
  break;
+   case 15:
+ xi->p2m_base = grub_le_to_cpu_addr (*(Elf_Addr *) desc);
+ xi->has_p2m_base = 1;
+ break;
case 16:
  xi->unmapped_initrd = !!grub_le_to_cpu32(*(grub_uint32_t *) desc);
  break;
diff --git a/include/grub/xen_file.h b/include/grub/xen_file.h
index ed749fa..6587999 100644
--- a/include/grub/xen_file.h
+++ b/include/grub/xen_file.h
@@ -32,9 +32,11 @@ struct grub_xen_file_info
   grub_uint64_t entry_point;
   grub_uint64_t hypercall_page;
   grub_uint64_t paddr_offset;
+  grub_uint64_t p2m_base;
   int has_hypercall_page;
   int has_note;
   int has_xen_guest;
+  int has_p2m_base;
   int extended_cr3;
   int unmapped_initrd;
   enum
-- 
2.1.4




[Xen-devel] ovmf fail to compile

2015-11-01 Thread Hao, Xudong
Hi,

Has anyone else hit an OVMF build failure? The ovmf tree is Xen's default repo:
http://xenbits.xen.org/git-http/ovmf.git, at the latest commit
af9785a9ed61daea52b47f0bf448f1f228beee1e; the build OS is x86_64 RHEL 6.6.


...
make[1]: *** [/home/nightly/builds_xen_unstable/xen-src-bf0d4923-20151029/tools/firmware/ovmf-dir/Build/OvmfX64/DEBUG_GCC44/X64/OvmfPkg/Sec/SecMain/DEBUG/SecMain.dll] Error 1


build.py...
: error 7000: Failed to execute command
   make tbuild 
[/home/nightly/builds_xen_unstable/xen-src-bf0d4923-20151029/tools/firmware/ovmf-dir/Build/OvmfX64/DEBUG_GCC44/X64/OvmfPkg/Sec/SecMain]


build.py...
: error 7000: Failed to execute command
   make tbuild 
[/home/nightly/builds_xen_unstable/xen-src-bf0d4923-20151029/tools/firmware/ovmf-dir/Build/OvmfX64/DEBUG_GCC44/X64/MdeModulePkg/Universal/PCD/Pei/Pcd]


build.py...
: error 7000: Failed to execute command
   make tbuild 
[/home/nightly/builds_xen_unstable/xen-src-bf0d4923-20151029/tools/firmware/ovmf-dir/Build/OvmfX64/DEBUG_GCC44/X64/MdeModulePkg/Core/Pei/PeiMain]


build.py...
: error 7000: Failed to execute command
   make tbuild 
[/home/nightly/builds_xen_unstable/xen-src-bf0d4923-20151029/tools/firmware/ovmf-dir/Build/OvmfX64/DEBUG_GCC44/X64/IntelFrameworkModulePkg/Universal/StatusCode/Pei/StatusCodePei]


build.py...
: error 7000: Failed to execute command
   make tbuild 
[/home/nightly/builds_xen_unstable/xen-src-bf0d4923-20151029/tools/firmware/ovmf-dir/Build/OvmfX64/DEBUG_GCC44/X64/MdeModulePkg/Core/DxeIplPeim/DxeIpl]


build.py...
: error F002: Failed to build module
   
/home/nightly/builds_xen_unstable/xen-src-bf0d4923-20151029/tools/firmware/ovmf-dir/OvmfPkg/Sec/SecMain.inf
 [X64, GCC44, DEBUG]


Best Regards
Xudong




Re: [Xen-devel] [PATCH v4 02/10] xen/blkfront: separate per ring information out of device info

2015-11-01 Thread Bob Liu

On 11/02/2015 12:49 PM, kbuild test robot wrote:
> Hi Bob,
> 
> [auto build test ERROR on v4.3-rc7 -- if it's inappropriate base, please 
> suggest rules for selecting the more suitable base]
> 
> url:
> https://github.com/0day-ci/linux/commits/Bob-Liu/xen-block-multi-hardware-queues-rings-support/20151102-122806
> config: x86_64-allyesconfig (attached as .config)
> reproduce:
> # save the attached .config to linux build tree
> make ARCH=x86_64 
> 
> Note: the 
> linux-review/Bob-Liu/xen-block-multi-hardware-queues-rings-support/20151102-122806
>  HEAD b29fe44b095649f8faddc4474daba13199c1f5e0 builds fine.
>   It only hurts bisectibility.
> 
> All errors (new ones prefixed by >>):
> 
>drivers/block/xen-blkfront.c: In function 'blkif_queue_rq':
>>> drivers/block/xen-blkfront.c:639:17: error: 'info' undeclared (first use in this function)
>  spin_lock_irq(&info->io_lock);
> ^
>    drivers/block/xen-blkfront.c:639:17: note: each undeclared identifier is reported only once for each function it appears in
> 

Sorry, I didn't compile-test after each patch.

Here is the fix; I'll fold it into the next version.
[root@x4-4 linux]# git diff
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 2a557e4..7face5e 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -634,6 +634,7 @@ static int blkif_queue_rq(struct blk_mq_hw_ctx *hctx,
   const struct blk_mq_queue_data *qd)
 {
 	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)hctx->driver_data;
+	struct blkfront_info *info = rinfo->dev_info;
 
 	blk_mq_start_request(qd->rq);
 	spin_lock_irq(&info->io_lock);
[root@x4-4 linux]# 

Thanks,
-Bob

> vim +/info +639 drivers/block/xen-blkfront.c
> 
> 907c3eb18 Bob Liu 2015-07-13  633  static int blkif_queue_rq(struct blk_mq_hw_ctx *hctx,
> 907c3eb18 Bob Liu 2015-07-13  634		const struct blk_mq_queue_data *qd)
> 9f27ee595 Jeremy Fitzhardinge 2007-07-17  635  {
> 2a8974fd4 Bob Liu 2015-11-02  636	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)hctx->driver_data;
> 9f27ee595 Jeremy Fitzhardinge 2007-07-17  637  
> 907c3eb18 Bob Liu 2015-07-13  638	blk_mq_start_request(qd->rq);
> 907c3eb18 Bob Liu 2015-07-13 @639	spin_lock_irq(&info->io_lock);
> 2a8974fd4 Bob Liu 2015-11-02  640	if (RING_FULL(&rinfo->ring))
> 907c3eb18 Bob Liu 2015-07-13  641		goto out_busy;
> 9f27ee595 Jeremy Fitzhardinge 2007-07-17  642  
> 
> :: The code at line 639 was first introduced by commit
> :: 907c3eb18e0bd86ca12a9de80befe8e3647bac3e xen-blkfront: convert to blk-mq APIs
> 
> :: TO: Bob Liu 
> :: CC: David Vrabel 
> 
> ---
> 0-DAY kernel test infrastructureOpen Source Technology Center
> https://lists.01.org/pipermail/kbuild-all   Intel Corporation
> 
