[Xen-devel] [qemu-mainline test] 128340: tolerable FAIL - PUSHED

2018-10-03 Thread osstest service owner
flight 128340 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/128340/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 128324
 test-armhf-armhf-libvirt 14 saverestore-support-check fail like 128324
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail like 128324
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail like 128324
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 128324
 test-amd64-i386-xl-pvshim 12 guest-start fail never pass
 test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1 14 saverestore-support-check fail never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 14 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit2 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 14 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit1 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1 14 saverestore-support-check fail never pass
 test-arm64-arm64-xl 13 migrate-support-check fail never pass
 test-arm64-arm64-xl 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl 13 migrate-support-check fail never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass

version targeted for testing:
 qemuu dafd95053611aa14dda40266857608d12ddce658
baseline version:
 qemuu 3892f1f1a963e59dfe012cd9d461d33b2986fa3b

Last test of basis   128324  2018-10-02 19:37:21 Z    1 days
Testing same since   128340  2018-10-03 10:07:15 Z    0 days    1 attempts


People who touched revisions under test:
  Alex Bennée 
  Chai Wen 
  Daniel P. Berrange 
  Daniel P. Berrangé 
  Emilio G. Cota 
  Fam Zheng 
  Geert Uytterhoeven 
  Guenter Roeck 
  Hikaru Nishida 
  Igor Mammedov 
  Jan Kiszka 
  John Snow 
  Li Qiang 
  Li Zhijian 
  Liran Alon 
  Marc-André Lureau 
  Mark Cave-Ayland 
  Paolo Bonzini 
  Pavel Dovgalyuk 
  Peter Maydell 
  Philippe Mathieu-Daudé 
  Rich Felker 
  Rob Landley 
  Thomas Huth 
  Ulrich Hecht 
  Viktor Prutyanov 
  Viktor Prutyanov 
  Yongji Xie 
  Yongji Xie 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-i386-xsm   pass
 build-amd64  

Re: [Xen-devel] [PATCH RFC] mm/memory_hotplug: Introduce memory block types

2018-10-03 Thread Michal Hocko
On Wed 03-10-18 19:00:29, David Hildenbrand wrote:
[...]
> Let me rephrase: You state that user space has to make the decision and
> that user should be able to set/reconfigure rules. That is perfectly fine.
> 
> But then we should give user space access to sufficient information to
> make a decision. This might be the type of memory as we learned (what
> some part of this patch proposes), but maybe later more, e.g. to which
> physical device memory belongs (e.g. to hotplug it all movable or all
> normal) ...

I am pretty sure that the user knows he/she wants to use ballooning in
HyperV or Xen, or that memory hotplug should be used as a "RAS"
feature to allow adding and removing DIMMs for reliability. Why shouldn't we
have a package to deploy an appropriate set of udev rules for each of
those usecases? I am pretty sure you need some other plumbing to enable
them anyway (e.g. RAS would require to have movable_node kernel
parameters, ballooning a kernel module etc.).
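
A sketch of what one such per-use-case package might ship — a single udev
rule that onlines hotplugged memory blocks as movable, suiting the RAS
use case (the rule text is illustrative, not taken from any shipped
package):

```
# /etc/udev/rules.d/99-hotplug-memory.rules (illustrative)
# Online each newly added, still-offline memory block as movable,
# so the DIMM can later be removed reliably.
SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online_movable"
```

A ballooning-oriented package would presumably ship the same rule with
"online" instead of "online_movable".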

Really, one udev script to rule them all will simply never work.
-- 
Michal Hocko
SUSE Labs

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] Ping: [PATCH] x86: improve vCPU selection in pagetable_dying()

2018-10-03 Thread Jan Beulich
>>> On 03.10.18 at 18:56,  wrote:
> On 09/26/2018 08:04 AM, Jan Beulich wrote:
> On 25.09.18 at 18:22,  wrote:
>>> On 18/09/18 13:44, Jan Beulich wrote:
>>> On 10.09.18 at 16:02,  wrote:
> Rather than unconditionally using vCPU 0, use the current vCPU if the
> subject domain is the current one.
>
> Signed-off-by: Jan Beulich 
>>>
>>> What improvement is this intended to bring?
>> 
>> I've come across this quite a while ago when investigating possibly
>> dangerous uses of d->vcpu[], well before your series to improve the
>> situation there. I generally consider it wrong to hard code use of
>> d->vcpu[0] whenever it can be avoided.
>> 
>>> Shadows are per-domain, and the gmfn in question is passed in by the
>>> caller.  AFAICT, it is a logical bug that the callback takes a vcpu
>>> rather than a domain in the first place.
>> 
>> Did you look at the 3-level variant of sh_pagetable_dying()? It very
>> clearly reads the given vCPU's CR3.
> 
> Yes; and so the current implementation which unconditionally passes vcpu
> 0 is clearly a bug.
> 
>> Looking at things again (in particular
>> the comment ahead of pagetable_dying()) I now actually wonder why
>> HVMOP_pagetable_dying is permitted to be called by other than a domain
>> for itself. There's no use of it in the tool stack. Disallowing the unused
>> case would mean the fast-path logic in sh_pagetable_dying() could
>> become the only valid/implemented case. Tim?
> 
> Not so -- a guest could still call pagetable_dying() on the top level PT
> of a process not currently running.

Oh, you're right of course.

> I would be totally in favor of limiting this call to the guest itself,
> however -- that would simplify the logic even more.

Will do in v2 then.

Jan




Re: [Xen-devel] [PATCH RFC] mm/memory_hotplug: Introduce memory block types

2018-10-03 Thread Michal Hocko
On Wed 03-10-18 19:14:05, David Hildenbrand wrote:
> On 03/10/2018 16:34, Vitaly Kuznetsov wrote:
> > Dave Hansen  writes:
> > 
> >> On 10/03/2018 06:52 AM, Vitaly Kuznetsov wrote:
> >>> It is more than just memmaps (e.g. forking udev process doing memory
> >>> onlining also needs memory) but yes, the main idea is to make the
> >>> onlining synchronous with hotplug.
> >>
> >> That's a good theoretical concern.
> >>
> >> But, is it a problem we need to solve in practice?
> > 
> > Yes, unfortunately. It was previously discovered that when we try to
> > hotplug tons of memory to a low memory system (this is a common scenario
> > with VMs) we end up with OOM because for all new memory blocks we need
> > to allocate page tables, struct pages, ... and we need memory to do
> > that. The userspace program doing memory onlining also needs memory to
> > run and in case it prefers to fork to handle hundreds of notifications 
> > ... well, it may get OOMkilled before it manages to online anything.
> > 
> > Allocating all kernel objects from the newly hotplugged blocks would
> > definitely help to manage the situation but as I said this won't solve
> > the 'forking udev' problem completely (it will likely remain in
> > 'extreme' cases only. We can probably work around it by onlining with a
> > dedicated process which doesn't do memory allocation).
> > 
> 
> I guess the problem is even worse. We always have two phases
> 
> 1. add memory - requires memory allocation
> 2. online memory - might require memory allocations e.g. for slab/slub
> 
> So if we just added memory but don't have sufficient memory to start a
> user space process to trigger onlining, then we most likely also don't
> have sufficient memory to online the memory right away (in some scenarios).
> 
> We would have to allocate all new memory for 1 and 2 from the memory to
> be onlined. I guess the latter part is less trivial.
> 
> So while onlining the memory from the kernel might make things a little
> more robust, we would still have the chance for OOM / onlining failing.

Yes, _theoretically_. Is this a practical problem for reasonable
configurations though? I mean, this will never be perfect and we simply
cannot support all possible configurations. We should focus on
reasonable subset of them. From my practical experience the vast
majority of memory is consumed by memmaps (roughly 1.5%). That is not a
lot but I agree that allocating that from the zone normal and off node
is not great. Especially the second part which is noticeable for whole
node hotplug.
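
The "roughly 1.5%" figure is consistent with one 64-byte struct page
describing each 4 KiB page — a quick back-of-the-envelope sketch (the
64-byte struct size is an assumption that holds for typical x86-64 builds):

```python
# Back-of-the-envelope memmap overhead for hotplugged memory.
# Assumption: one 64-byte struct page per 4 KiB page (typical x86-64).
STRUCT_PAGE_BYTES = 64
PAGE_BYTES = 4096
GiB = 1 << 30

def memmap_overhead_fraction():
    """Fraction of hotplugged memory consumed by its own memmap."""
    return STRUCT_PAGE_BYTES / PAGE_BYTES

def memmap_bytes(hotplugged_bytes):
    """Memory needed for the memmap of a given hotplug size."""
    return int(hotplugged_bytes * memmap_overhead_fraction())

print(f"{memmap_overhead_fraction():.2%}")          # 1.56%
print(memmap_bytes(128 * GiB) // (1 << 20), "MiB")  # 2048 MiB
```

So whole-node hotplug of 128 GiB needs about 2 GiB just for memmaps, which
is why allocating them off-node is noticeable.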

I have a feeling that arguing about fork not being able to proceed or
OOMing during memory hotplug is a bit of a stretch and a sign of a
misconfiguration.
-- 
Michal Hocko
SUSE Labs


[Xen-devel] [linux-linus test] 128334: regressions - FAIL

2018-10-03 Thread osstest service owner
flight 128334 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/128334/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 125898
 test-amd64-i386-freebsd10-i386 11 guest-start fail REGR. vs. 125898
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install fail REGR. vs. 125898
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install fail REGR. vs. 125898
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 125898
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install fail REGR. vs. 125898
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install fail REGR. vs. 125898
 test-amd64-amd64-rumprun-amd64 7 xen-boot fail REGR. vs. 125898
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 125898
 test-amd64-amd64-xl-qemuu-win7-amd64 7 xen-boot fail REGR. vs. 125898
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 125898
 test-amd64-amd64-xl-multivcpu 7 xen-boot fail REGR. vs. 125898
 test-amd64-amd64-xl-pvhv2-intel 7 xen-boot fail REGR. vs. 125898
 test-amd64-amd64-xl-qcow2 7 xen-boot fail REGR. vs. 125898
 test-amd64-amd64-libvirt-vhd 7 xen-boot fail REGR. vs. 125898
 test-amd64-amd64-libvirt-xsm 7 xen-boot fail REGR. vs. 125898
 test-amd64-i386-xl-shadow 7 xen-boot fail REGR. vs. 125898
 test-amd64-i386-qemut-rhel6hvm-intel 7 xen-boot fail REGR. vs. 125898
 test-amd64-i386-libvirt-pair 10 xen-boot/src_host fail REGR. vs. 125898
 test-amd64-i386-libvirt-pair 11 xen-boot/dst_host fail REGR. vs. 125898
 test-amd64-i386-xl 7 xen-boot fail REGR. vs. 125898
 test-amd64-i386-pair 10 xen-boot/src_host fail REGR. vs. 125898
 test-amd64-i386-pair 11 xen-boot/dst_host fail REGR. vs. 125898
 test-amd64-i386-examine 8 reboot fail REGR. vs. 125898
 test-amd64-i386-xl-qemut-debianhvm-amd64 7 xen-boot fail REGR. vs. 125898
 test-amd64-amd64-examine 8 reboot fail REGR. vs. 125898
 test-amd64-i386-freebsd10-amd64 11 guest-start fail REGR. vs. 125898
 test-amd64-amd64-pygrub 7 xen-boot fail REGR. vs. 125898
 test-amd64-i386-rumprun-i386 7 xen-boot fail REGR. vs. 125898
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 125898
 test-amd64-amd64-libvirt-pair 10 xen-boot/src_host fail REGR. vs. 125898
 test-amd64-amd64-libvirt-pair 11 xen-boot/dst_host fail REGR. vs. 125898
 test-amd64-amd64-pair 10 xen-boot/src_host fail REGR. vs. 125898
 test-amd64-amd64-pair 11 xen-boot/dst_host fail REGR. vs. 125898
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 125898
 test-amd64-i386-xl-qemut-win10-i386 7 xen-boot fail REGR. vs. 125898

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds  7 xen-boot fail REGR. vs. 125898

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-credit1 7 xen-boot fail baseline untested
 test-armhf-armhf-libvirt 14 saverestore-support-check fail like 125898
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail like 125898
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail like 125898
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail like 125898
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 125898
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 125898
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-xl-pvshim 12 guest-start fail never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 14 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit1 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1 14 saverestore-support-check fail never pass
 test-arm64-arm64-xl 13 migrate-support-check fail never pass
 test-arm64-arm64-xl 14 saverestore-support-check fail never pass
 test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-su

[Xen-devel] [ovmf baseline-only test] 75347: trouble: blocked/broken

2018-10-03 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 75347 ovmf real [real]
http://osstest.xensource.com/osstest/logs/75347/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm  broken
 build-i386   broken
 build-amd64-pvopsbroken
 build-i386-xsm   broken
 build-amd64  broken
 build-i386-pvops broken

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 build-amd64   4 host-install(4)   broken baseline untested
 build-amd64-xsm   4 host-install(4)   broken baseline untested
 build-i386-pvops  4 host-install(4)   broken baseline untested
 build-i3864 host-install(4)   broken baseline untested
 build-amd64-pvops 4 host-install(4)   broken baseline untested
 build-i386-xsm4 host-install(4)   broken baseline untested

version targeted for testing:
 ovmf c0b1f749ef1304810ed4ea58ded65b7f41d79d3e
baseline version:
 ovmf c526dcd40f3a0f3a091684481f9c85f03f6a70a7

Last test of basis    75327  2018-09-30 15:52:12 Z    3 days
Testing same since    75347  2018-10-04 00:50:33 Z    0 days    1 attempts


People who touched revisions under test:
  Jim Dailey 
  jim.dai...@dell.com 

jobs:
 build-amd64-xsm  broken  
 build-i386-xsm   broken  
 build-amd64  broken  
 build-i386   broken  
 build-amd64-libvirt  blocked 
 build-i386-libvirt   blocked 
 build-amd64-pvopsbroken  
 build-i386-pvops broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64 blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64  blocked 



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xensource.com/osstest/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary

broken-job build-amd64-xsm broken
broken-job build-i386 broken
broken-job build-amd64-pvops broken
broken-job build-i386-xsm broken
broken-job build-amd64 broken
broken-job build-i386-pvops broken
broken-step build-amd64 host-install(4)
broken-step build-amd64-xsm host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-i386 host-install(4)
broken-step build-amd64-pvops host-install(4)
broken-step build-i386-xsm host-install(4)

Push not applicable.


commit c0b1f749ef1304810ed4ea58ded65b7f41d79d3e
Author: jim.dai...@dell.com 
Date:   Wed Oct 3 09:02:24 2018 -0700

ShellPkg: Create a homefilesystem environment variable

Create a homefilesystem environment variable whose value is the file
system on which the executing shell is located. For example: "FS14:".

This eliminates the need for people to try to find the "boot"
file system in their startup script.  After this change they can simply
execute %homefilesystem% to set the cwd to the root of the file system
where the shell is located.

A future enhancement could be to add "homefilesystem" to the list of
predefined, read-only variables listed in the EfiShellSetEnv function of
file ShellProtocol.c.
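
As a hedged illustration of the change (the \tools path and script
contents below are hypothetical, not from the patch), a startup.nsh
could use the new variable like this:

```
rem startup.nsh -- hypothetical example
rem Switch to the file system the shell was loaded from, whatever
rem drive mapping it received on this boot (e.g. FS14:).
%homefilesystem%
cd \tools
```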

Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jim Dailey 
Reviewed-by: Jaben Carsey 


Re: [Xen-devel] Question, How to share interrupt between Doms

2018-10-03 Thread Peng Fan


> -Original Message-
> From: Julien Grall [mailto:julien.gr...@arm.com]
> Sent: 3 October 2018 0:03
> To: Peng Fan ; Stefano Stabellini 
> Cc: xen-devel@lists.xenproject.org; Andre Przywara
> 
> Subject: Re: Question, How to share interrupt between Doms
> 
> On 02/10/2018 09:32, Peng Fan wrote:
> > Hi Julien, Stefano,
> 
> Hi Peng,
> 
> >
> > Do you have any suggestions on how to share one interrupt between Doms?
> 
> Sharing interrupts is usually a pain. You would need to forward the 
> interrupts
> to all the domains using that interrupt and wait for them to EOI. This has
> security implications because you don't want DomA to prevent DomB receiving
> another interrupt because the previous one has not been EOIed correctly.
> 
> > The issue is that a gpio controller has 32 in/out ports, however it only
> > has one bound interrupt. The interrupt handler needs to check the status
> > bits to see which port the interrupt is coming from.
> > In my case, there are different devices using gpio interrupts that need
> > to be assigned to different doms.
> 
>  From what you wrote, it looks like you expect the GPIO controller to be 
> shared
> with multiple domains.
> 
> I don't think it is safe to do that. You need one domain (or Xen) to
> fully manage the controller. All the other domains will have to access
> either a virtual GPIO controller or a PV one. In the former case the
> interrupt would be virtual, while in the latter it would be delivered
> through an event channel.
> 
> So sharing interrupts should not be necessary. Did I miss anything?

When an interrupt comes, Dom0 will handle it and then forward it to DomU.
But I have not found a good method to forward the interrupt and hook it up
in the DomU dts and DomU driver.

In DomU, the driver needs to request the IRQ and the dts needs an
interrupts= property. But when Dom0 notifies the remote end, there is no
hook in the frontend driver or in the other driver's interrupt handler.

Thanks,
Peng.

> 
> Cheers,
> 
> --
> Julien Grall

[Xen-devel] [linux-linus bisection] complete test-amd64-amd64-examine

2018-10-03 Thread osstest service owner
branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-examine
testid reboot

Tree: freebsd git://github.com/freebsd/freebsd.git
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux 
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
  Bug introduced:  385afbf8c3e8bdf13fc729e8b2c172d1208d97f9
  Bug not present: 43b6b6eca863cf2f83dc062484963377c66a72be
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/128361/


  (Revision log too long, omitted.)


For bisection revision-tuple graph see:
http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-linus/test-amd64-amd64-examine.reboot.html
Revision IDs in each graph node refer, respectively, to the Trees above.


Running cs-bisection-step 
--graph-out=/home/logs/results/bisect/linux-linus/test-amd64-amd64-examine.reboot
 --summary-out=tmp/128361.bisection-summary --basis-template=125898 
--blessings=real,real-bisect linux-linus test-amd64-amd64-examine reboot
Searching for failure / basis pass:
 128312 fail [host=chardonnay0] / 127148 [host=italia0] 127108 [host=elbling0] 
127038 [host=baroque0] 126978 [host=godello1] 126888 [host=albana1] 126682 
[host=elbling1] 126550 [host=godello0] 126412 [host=fiano0] 126310 
[host=joubertin0] 126202 [host=debina0] 126069 [host=debina1] 125921 
[host=godello1] 125898 [host=baroque1] 125702 [host=huxelrebe1] 125676 
[host=albana1] 125657 [host=chardonnay1] 125648 [host=godello0] 125639 
[host=joubertin1] 125585 [host=baroque1] 125551 [host=albana0] 125520 
[host=godello1] 125501 [host=fiano1] 125401 [host=rimava1] 125285 
[host=italia0] 125242 [host=baroque0] 125167 [host=godello0] 125129 
[host=chardonnay1] 125069 [host=albana1] 125041 ok.
Failure / basis pass flights: 128312 / 125041
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: freebsd git://github.com/freebsd/freebsd.git
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 04d432fdc0c15f2da76dac4a9a5caf1aeb051ef0 
385afbf8c3e8bdf13fc729e8b2c172d1208d97f9 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
9c0eed618f37dd5b4a57c8b3fbc48ef8913e3149 
de5b678ca4dcdfa83e322491d478d66df56c1986 
940185b2f6f343251c2b83bd96e599398cea51ec
Basis pass ff20311f27958b751ed21c94a01ed31c8d787f0b 
43b6b6eca863cf2f83dc062484963377c66a72be 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
c8ea0457495342c417c3dc033bba25148b279f60 
43139135a8938de44f66333831d3a8655d07663a 
b4ac4bc410222d221dc46a74ac71efaa7b32d57c
Generating revisions with ./adhoc-revtuple-generator  
git://github.com/freebsd/freebsd.git#ff20311f27958b751ed21c94a01ed31c8d787f0b-04d432fdc0c15f2da76dac4a9a5caf1aeb051ef0
 
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git#43b6b6eca863cf2f83dc062484963377c66a72be-385afbf8c3e8bdf13fc729e8b2c172d1208d97f9
 
git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860
 
git://xenbits.xen.org/qemu-xen-traditional.git#c8ea0457495342c417c3dc033bba25148b279f60-9c0eed618f37dd5b4a57c8b3fbc48ef8913e3149
 
git://xenbits.xen.org/qemu-xen.git#43139135a8938de44f66333831d3a8655d07663a-de5b678ca4dcdfa83e322491d478d66df56c1986
 
git://xenbits.xen.org/xen.git#b4ac4bc410222d221dc46a74ac71efaa7b32d57c-940185b2f6f343251c2b83bd96e599398cea51ec
adhoc-revtuple-generator: tree discontiguous: freebsd
From 
git://cache:9419/git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6
   95773dc08627..cec4de302c5f  master -> origin/master
adhoc-revtuple-generator: tree discontiguous: linux-2.6
adhoc-revtuple-generator: tree discontiguous: qemu-xen
Loaded 2007 nodes in revision graph
Searching for test results:
 124938 [host=albana0]
 124994 [host=godello1]
 125001 [host=albana0]
 125004 [host=albana0]
 125006 [host=albana0]
 125041 pass ff20311f27958b751ed21c94a01ed31c8d787f0b 
43b6b6eca863cf2f83dc062484963377c66a72be 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
c8ea0457495342c417c3dc033bba25148b279f60 
43139135a8938de44f66333831d3a8655d07663a 
b4ac4bc410222d221dc46a74ac71efaa7b32d57c
 125069 [host=albana1]
 125167 [host=godello0]
 125129 [host=chardonnay1]
 125242 [host=baroque0]
 125285 [host=italia0]
 125401 [host=rimava1]
 125501 [host=fiano1]
 125551 [host=albana0]
 125520 [host=godello1]
 125585 [host=baroque1]
 125648 [host=godello0]
 125639 [host=joubertin1]
 125657 [host=chardonnay1]
 125676 [host=albana1]
 125702 [host=huxelrebe1]

[Xen-devel] [ovmf test] 128351: all pass - PUSHED

2018-10-03 Thread osstest service owner
flight 128351 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/128351/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf c0b1f749ef1304810ed4ea58ded65b7f41d79d3e
baseline version:
 ovmf c526dcd40f3a0f3a091684481f9c85f03f6a70a7

Last test of basis   128255  2018-09-30 11:40:39 Z    3 days
Testing same since   128351  2018-10-03 18:40:50 Z    0 days    1 attempts


People who touched revisions under test:
  Jim Dailey 
  jim.dai...@dell.com 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   c526dcd40f..c0b1f749ef  c0b1f749ef1304810ed4ea58ded65b7f41d79d3e -> xen-tested-master


Re: [Xen-devel] [PATCH v3 09/25] xen/arm: introduce bootcmdlines

2018-10-03 Thread Stefano Stabellini
On Wed, 1 Aug 2018, Julien Grall wrote:
> Hi Stefano,
> 
> On 01/08/18 00:27, Stefano Stabellini wrote:
> > Introduce a new array to store the cmdline of each boot module. It is
> > separate from struct bootmodules. Remove the cmdline field from struct
> > boot_module. This way, kernels and initrds with the same address in
> > memory can share struct bootmodule (important because we want them to be
> > free'd only once), but they can still have their separate bootcmdline
> > entries.
> > 
> > Add a dt_name field to struct bootcmdline to make it easier to find the
> > correct entry. Store the name of the "xen,domain" compatible node (for
> > example "Dom1"). This is a better choice compared to the name of the
> > "multiboot,kernel" compatible node, because their names are not unique.
> > For instance there can be more than one "module@0x4c00" in the
> > system, but there can only be one "/chosen/Dom1".
> 
> As I mentioned in the previous version, the code is currently looking for
> multiboot,module everywhere in the DT rather than only in /chosen. So your
> name could not be uniq.
> 
> However, this is not compliant with the protocol. Therefore you need to fix
> the code first to ensure the name will be uniq.

I'll fix this and everything else you pointed out.


> > 
> > Add a pointer to struct kernel_info to point to the cmdline for a given
> > kernel.
> > 
> > Signed-off-by: Stefano Stabellini 
> > 
> > ---
> > 
> > Changes in v3:
> > - introduce bootcmdlines
> > - do not modify boot_fdt_cmdline
> > - add comments
> 
> I see no comments in the code. Did I miss anything?
> 
> > 
> > Changes in v2:
> > - new patch
> > ---
> >   xen/arch/arm/bootfdt.c  | 66
> > +++--
> >   xen/arch/arm/domain_build.c |  8 +++---
> >   xen/arch/arm/kernel.h   |  1 +
> >   xen/arch/arm/setup.c| 23 +++-
> >   xen/include/asm-arm/setup.h | 16 +--
> >   5 files changed, 82 insertions(+), 32 deletions(-)
> > 
> > diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> > index 8eba42c..6f44022 100644
> > --- a/xen/arch/arm/bootfdt.c
> > +++ b/xen/arch/arm/bootfdt.c
> > @@ -163,6 +163,38 @@ static void __init process_memory_node(const void *fdt,
> > int node,
> >   }
> >   }
> >   +static void __init add_boot_cmdline(const void *fdt, int node,
> > +const char *name, bootmodule_kind kind)
> > +{
> > +struct bootcmdlines *mods = &bootinfo.cmdlines;
> > +struct bootcmdline *mod;
> 
> This feels slightly strange to use "mod" here. We are not dealing with boot
> modules but boot command line.

I'll rename it


> > +const struct fdt_property *prop;
> > +int len;
> > +const char *cmdline;
> > +
> > +if ( mods->nr_mods == MAX_MODULES )
> > +{
> > +printk("Ignoring %s boot module (too many)\n", name);
> 
> Same here. This needs to be updated.

I'll reword it


> > +return;
> > +}
> > +
> > +mod = &mods->cmdline[mods->nr_mods++];
> > +mod->kind = kind;
> > +
> > +if ( strlen(name) > DT_MAX_NAME )
> > +panic("module %s name too long\n", name);
> 
> This would really never happen. It feels an ASSERT(strlen(name) >
> DT_MAX_NAME) would be more suitable.

OK, easy to change


> > +safe_strcpy(mod->dt_name, name);
> > +
> > +prop = fdt_get_property(fdt, node, "bootargs", &len);
> > +if ( prop )
> > +{
> > +if ( len > BOOTMOD_MAX_CMDLINE )
> > +panic("module %s command line too long\n", name);
> > +cmdline = prop->data;
> > +safe_strcpy(mod->cmdline, cmdline);
> > +}
> > +}
> > +
> >   static void __init process_multiboot_node(const void *fdt, int node,
> > const char *name,
> > u32 address_cells, u32
> > size_cells)
> > @@ -172,8 +204,12 @@ static void __init process_multiboot_node(const void
> > *fdt, int node,
> >   const __be32 *cell;
> >   bootmodule_kind kind;
> >   paddr_t start, size;
> > -const char *cmdline;
> >   int len;
> > +int parent_node;
> > +
> > +parent_node = fdt_parent_offset(fdt, node);
> > +if ( parent_node < 0 )
> > +panic("node %s missing a parent\n", name);
> 
> It feels an ASSERT(parent_node < 0) would be more suitable as this should
> never really happen.

OK


> > prop = fdt_get_property(fdt, node, "reg", &len);
> >   if ( !prop )
> > @@ -220,17 +256,8 @@ static void __init process_multiboot_node(const void
> > *fdt, int node,
> >   kind = BOOTMOD_XSM;
> >   }
> > -prop = fdt_get_property(fdt, node, "bootargs", &len);
> > -if ( prop )
> > -{
> > -if ( len > BOOTMOD_MAX_CMDLINE )
> > -panic("module %s command line too long\n", name);
> > -cmdline = prop->data;
> > -}
> > -else
> > -cmdline = NULL;
> > -
> 
> I am not entirely sure I understand why this code has been

Re: [Xen-devel] [PATCH v3 22/25] xen/arm: Allow vpl011 to be used by DomU

2018-10-03 Thread Stefano Stabellini
On Wed, 22 Aug 2018, Julien Grall wrote:
> On 16/08/18 20:21, Stefano Stabellini wrote:
> > On Mon, 13 Aug 2018, Julien Grall wrote:
> > > Hi Stefano,
> > > 
> > > On 01/08/18 00:28, Stefano Stabellini wrote:
> > > > Make the vpl011 able to be used without a userspace component in Dom0.
> > > > In that case, output is printed to the Xen serial and input is received
> > > > from the Xen serial one character at a time.
> > > > 
> > > > Call domain_vpl011_init during construct_domU if vpl011 is enabled.
> > > > 
> > > > Introduce a new ring struct with only the ring array to avoid a waste of
> > > > memory. Introduce separate read_data and write_data functions for
> > > > initial domains: vpl011_write_data_xen is very simple and just writes
> > > > to the console, while vpl011_read_data_xen is a duplicate of
> > > > vpl011_read_data. Although textually almost identical, we are forced to
> > > > duplicate the functions because the struct layout is different.
> > > > 
> > > > Output characters are printed one by one, potentially leading to
> > > > intermixed output of different domains on the console. A follow-up patch
> > > > will solve the issue by introducing buffering.
> > > > 
> > > > Signed-off-by: Stefano Stabellini 
> > > > ---
> > > > Changes in v3:
> > > > - add in-code comments
> > > > - improve existing comments
> > > > - remove ifdef around domain_vpl011_init in construct_domU
> > > > - add ASSERT
> > > > - use SBSA_UART_FIFO_SIZE for in buffer size
> > > > - rename ring_enable to backend_in_domain
> > > > - rename struct xencons_in to struct vpl011_xen_backend
> > > > - rename inring field to xen
> > > > - rename helper functions accordingly
> > > > - remove unnecessary stub implementation of vpl011_rx_char
> > > > - move vpl011_rx_char_xen within the file to avoid the need of a forward
> > > > declaration of vpl011_data_avail
> > > > - fix small bug in vpl011_rx_char_xen: increment in_prod before using it
> > > > to check xencons_queued.
> > > > 
> > > > Changes in v2:
> > > > - only init if vpl011
> > > > - rename vpl011_read_char to vpl011_rx_char
> > > > - remove spurious change
> > > > - fix coding style
> > > > - use different ring struct
> > > > - move the write_data changes to their own function
> > > > (vpl011_write_data_noring)
> > > > - duplicate vpl011_read_data
> > > > ---
> > > >xen/arch/arm/domain_build.c  |   9 +-
> > > >xen/arch/arm/vpl011.c| 198
> > > > ++-
> > > >xen/include/asm-arm/vpl011.h |   8 ++
> > > >3 files changed, 192 insertions(+), 23 deletions(-)
> > > > 
> > > > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > > > index f9fa484..0888a76 100644
> > > > --- a/xen/arch/arm/domain_build.c
> > > > +++ b/xen/arch/arm/domain_build.c
> > > > @@ -2638,7 +2638,14 @@ static int __init construct_domU(struct domain
> > > > *d,
> > > > struct dt_device_node *node)
> > > >if ( rc < 0 )
> > > >return rc;
> > > >-return __construct_domain(d, &kinfo);
> > > > +rc = __construct_domain(d, &kinfo);
> > > > +if ( rc < 0 )
> > > > +return rc;
> > > > +
> > > > +if ( kinfo.vpl011 )
> > > > +rc = domain_vpl011_init(d, NULL);
> > > > +
> > > > +return rc;
> > > >}
> > > >  void __init create_domUs(void)
> > > > diff --git a/xen/arch/arm/vpl011.c b/xen/arch/arm/vpl011.c
> > > > index 725a203..f206c61 100644
> > > > --- a/xen/arch/arm/vpl011.c
> > > > +++ b/xen/arch/arm/vpl011.c
> > > > @@ -77,6 +77,91 @@ static void vpl011_update_interrupt_status(struct
> > > > domain
> > > > *d)
> > > >#endif
> > > >}
> > > >+/*
> > > > + * vpl011_write_data_xen writes chars from the vpl011 out buffer to the
> > > > + * console. Only to be used when the backend is Xen.
> > > > + */
> > > > +static void vpl011_write_data_xen(struct domain *d, uint8_t data)
> > > > +{
> > > > +unsigned long flags;
> > > > +struct vpl011 *vpl011 = &d->arch.vpl011;
> > > > +
> > > > +VPL011_LOCK(d, flags);
> > > > +
> > > > +printk("%c", data);
> > > > +if (data == '\n')
> > > > +printk("DOM%u: ", d->domain_id);
> > > 
> > > There is a problem in this code. The first line of a domain will always be
> > > printed without "DOM%u: " in front. This means you don't really know where
> > > it is coming from until you get the second line.
> > 
> > This problem is solved by the follow-up patch that introduces characters
> > buffering. I'll mention it in the commit message.
> 
> To be honest, this should be solved in this patch and not the follow-up one.

I agree. I kept them separate to make them easier to review. Would you
be OK with me merging the two patches into one once they have both been
acked? Otherwise, if you prefer that I merge them now, let me know.

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] Ping: [PATCH] x86: improve vCPU selection in pagetable_dying()

2018-10-03 Thread Tim Deegan
At 17:56 +0100 on 03 Oct (1538589366), George Dunlap wrote:
> On 09/26/2018 08:04 AM, Jan Beulich wrote:
> > Looking at things again (in particular
> > the comment ahead of pagetable_dying()) I now actually wonder why
> > HVMOP_pagetable_dying is permitted to be called by other than a domain
> > for itself. There's no use of it in the tool stack. Disallowing the unused
> > case would mean the fast-path logic in sh_pagetable_dying() could
> > become the only valid/implemented case. Tim?
> 
> Not so -- a guest could still call pagetable_dying() on the top level PT
> of a process not currently running.
> 
> I would be totally in favor of limiting this call to the guest itself,
> however -- that would simplify the logic even more.

Yes, I think that this can be restricted to the caller's domain, and
so always use current as the vcpu.  I don't recall a use case for
setting this from outside the VM.

I can't find reason for the vcpu[0] in the history, but it does look
wrong.  I suspect this patch might have been in a XenServer patch
queue for a while, and perhaps the plumbing was fixed up incorrectly
when it was upstreamed.

Cheers,

Tim.


[Xen-devel] [xen-unstable test] 128333: regressions - FAIL

2018-10-03 Thread osstest service owner
flight 128333 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/128333/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 15 guest-saverestore.2 fail 
REGR. vs. 128084
 test-amd64-amd64-xl-qemuu-ovmf-amd64 16 guest-localmigrate/x10 fail REGR. vs. 
128084

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 128084
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 128084
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 128084
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 128084
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 128084
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 128084
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 128084
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 128084
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 128084
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 xen  54ec59f6b0b363c34cf1864d5214a05e35ea75ee
baseline version:
 xen  940185b2f6f343251c2b83bd96e599398cea51ec

Last test of basis   128084  2018-09-26 01:51:53 Z7 days
Failing since128118  2018-09-27 00:37:03 Z6 days7 attempts
Testing same since   128333  2018-10-03 05:16:13 Z0 days1 attempts


People who 

Re: [Xen-devel] [PATCH v4 06/12] x86/genapic: patch indirect calls to direct ones

2018-10-03 Thread Andrew Cooper
On 02/10/18 11:14, Jan Beulich wrote:
> For (I hope) obvious reasons only the ones used at runtime get
> converted.
>
> Signed-off-by: Jan Beulich 
> Reviewed-by: Wei Liu 

Acked-by: Andrew Cooper 


Re: [Xen-devel] [PATCH v4 05/12] x86/genapic: remove indirection from genapic hook accesses

2018-10-03 Thread Andrew Cooper
On 02/10/18 11:14, Jan Beulich wrote:
> Instead of loading a pointer at each use site, have a single runtime
> instance of struct genapic, copying into it from the individual
> instances. The individual instances can this way also be moved to .init
> (also adjust apic_probe[] at this occasion).
>
> Signed-off-by: Jan Beulich 
> Reviewed-by: Wei Liu 

Acked-by: Andrew Cooper 


Re: [Xen-devel] [PATCH v4 04/12] x86: patch ctxt_switch_masking() indirect call to direct one

2018-10-03 Thread Andrew Cooper
On 02/10/18 11:13, Jan Beulich wrote:
> Signed-off-by: Jan Beulich 
> Reviewed-by: Wei Liu 

Reviewed-by: Andrew Cooper 


Re: [Xen-devel] [PATCH v4 03/12] x86/HVM: patch vINTR indirect calls through hvm_funcs to direct ones

2018-10-03 Thread Andrew Cooper
On 02/10/18 11:13, Jan Beulich wrote:
> @@ -1509,7 +1513,8 @@ static int lapic_load_regs(struct domain
>  lapic_load_fixup(s);
>  
>  if ( hvm_funcs.process_isr )
> -hvm_funcs.process_isr(vlapic_find_highest_isr(s), v);
> +alternative_vcall(hvm_funcs.process_isr,
> +   vlapic_find_highest_isr(s), v);

Alignment.

Other than this, Reviewed-by: Andrew Cooper 


Re: [Xen-devel] [PATCH v4 02/12] x86/HVM: patch indirect calls through hvm_funcs to direct ones

2018-10-03 Thread Andrew Cooper
On 02/10/18 11:12, Jan Beulich wrote:
> This is intentionally not touching hooks used rarely (or not at all)
> during the lifetime of a VM, like {domain,vcpu}_initialise or cpu_up,
> as well as nested, VM event, and altp2m ones (they can all be done
> later, if so desired). Virtual Interrupt delivery ones will be dealt
> with in a subsequent patch.
>
> Signed-off-by: Jan Beulich 
> Reviewed-by: Wei Liu 

Acked-by: Andrew Cooper 

It is a shame that we don't have a variation such as cond_alt_vcall()
which nops out the entire call when the function pointer is NULL, but I
can't think of any sane way of trying to make that happen.

~Andrew


Re: [Xen-devel] [PATCH v4 01/12] x86: infrastructure to allow converting certain indirect calls to direct ones

2018-10-03 Thread Andrew Cooper
On 02/10/18 11:12, Jan Beulich wrote:
> In a number of cases the targets of indirect calls get determined once
> at boot time. In such cases we can replace those calls with direct ones
> via our alternative instruction patching mechanism.
>
> Some of the targets (in particular the hvm_funcs ones) get established
> only in pre-SMP initcalls, making necessary a second pass through the
> alternative patching code. Therefore some adjustments beyond the
> recognition of the new special pattern are necessary there.
>
> Note that patching such sites more than once is not supported (and the
> supplied macros also don't provide any means to do so).
>
> Signed-off-by: Jan Beulich 

Reviewing just the code generation at this point.

See the Linux source code for ASM_CALL_CONSTRAINT.  There is a potential
code generation issue if you've got a call instruction inside an asm
block and don't list the stack pointer as a clobbered output.

Next, with Clang, there seems to be a bug causing the function
pointer to be spilled onto the stack:

82d08026e990 :
82d08026e990:   50  push   %rax
82d08026e991:   48 8b 05 40 bc 20 00mov0x20bc40(%rip),%rax  
  # 82d08047a5d8 
82d08026e998:   48 89 04 24 mov%rax,(%rsp)
82d08026e99c:   ff 15 36 bc 20 00   callq  *0x20bc36(%rip)# 
82d08047a5d8 
82d08026e9a2:   31 c0   xor%eax,%eax
82d08026e9a4:   59  pop%rcx
82d08026e9a5:   c3  retq   
82d08026e9a6:   66 2e 0f 1f 84 00 00nopw   %cs:0x0(%rax,%rax,1)
82d08026e9ad:   00 00 00 


I'm not quite sure what is going on here, and the binary does boot, but
the code gen is definitely not correct.  Given this and the GCC bugs
you've found leading to the NO_ARG infrastructure, how about dropping
all the compatibility hacks, and making the infrastructure fall back to
a regular compiler-inserted function pointer call?

I think it is entirely reasonable to require people wanting to use this
optimised infrastructure to be using new-enough compilers, and it would
avoid the need to carry compatibility hacks for broken compilers.

Next, the ASM'd calls aren't SYSV-ABI compliant.

extern void bar(void);

int foo1(void)
{
hvm_funcs.wbinvd_intercept();
return 0;
}

int foo2(void)
{
alternative_vcall(hvm_funcs.wbinvd_intercept);
return 0;
}

int bar1(void)
{
bar();
return 0;
}

82d08026e1e0 :
82d08026e1e0:   48 83 ec 08 sub$0x8,%rsp
82d08026e1e4:   48 8b 05 c5 49 1d 00mov0x1d49c5(%rip),%rax  
  # 82d080442bb0 
82d08026e1eb:   e8 30 2d 0f 00  callq  82d080360f20 
<__x86_indirect_thunk_rax>
82d08026e1f0:   31 c0   xor%eax,%eax
82d08026e1f2:   48 83 c4 08 add$0x8,%rsp
82d08026e1f6:   c3  retq   
82d08026e1f7:   66 0f 1f 84 00 00 00nopw   0x0(%rax,%rax,1)
82d08026e1fe:   00 00 

82d08026e200 :
82d08026e200:   ff 15 aa 49 1d 00   callq  *0x1d49aa(%rip)# 
82d080442bb0 
82d08026e206:   31 c0   xor%eax,%eax
82d08026e208:   c3  retq   
82d08026e209:   0f 1f 80 00 00 00 00nopl   0x0(%rax)

82d08026e210 :
82d08026e210:   48 83 ec 08 sub$0x8,%rsp
82d08026e214:   e8 17 18 01 00  callq  82d08027fa30 
82d08026e219:   31 c0   xor%eax,%eax
82d08026e21b:   48 83 c4 08 add$0x8,%rsp
82d08026e21f:   c3  retq   

foo2 which uses alternative_vcall() should be subtracting 8 from the
stack pointer before the emitted call instruction.  I can't find any set
of constraints which causes the stack to be set up correctly.

Finally, this series doesn't link with the default Debian toolchain.

andrewcoop@andrewcoop:/local/xen.git/xen$ ld --version
GNU ld (GNU Binutils for Debian) 2.25

andrewcoop@andrewcoop:/local/xen.git/xen$ make -s build -j8 
XEN_TARGET_ARCH=x86_64 KCONFIG_CONFIG=.config-release
 __  ___  __  __ _  
 \ \/ /___ _ __   | || |  / |___ \_   _ _ __  ___| |_ __ _| |__ | | ___ 
  \  // _ \ '_ \  | || |_ | | __) |__| | | | '_ \/ __| __/ _` | '_ \| |/ _ \
  /  \  __/ | | | |__   _|| |/ __/|__| |_| | | | \__ \ || (_| | |_) | |  __/
 /_/\_\___|_| |_||_|(_)_|_|   \__,_|_| |_|___/\__\__,_|_.__/|_|\___|

prelink.o:(.debug_aranges+0x3c94): relocation truncated to fit: R_X86_64_32 
against `.debug_info'
prelink.o:(.debug_info+0x225fa): relocation truncated to fit: R_X86_64_32 
against `.debug_str'
prelink.o:(.debug_info+0x22b57): relocation truncated to fit: R_X86_64_32 
against `.debug_str'
prelink.o:(.debug_info+0x1b92d

[Xen-devel] [xen-unstable-smoke test] 128347: tolerable all pass - PUSHED

2018-10-03 Thread osstest service owner
flight 128347 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/128347/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  359970fd8b781fac2ddcbc84dd5b890075fa08ef
baseline version:
 xen  54ec59f6b0b363c34cf1864d5214a05e35ea75ee

Last test of basis   128323  2018-10-02 19:07:13 Z0 days
Testing same since   128347  2018-10-03 16:00:46 Z0 days1 attempts


People who touched revisions under test:
  Julien Grall 
  Wei Liu 

jobs:
 build-arm64-xsm  pass
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-arm64-arm64-xl-xsm  pass
 test-amd64-amd64-xl-qemuu-debianhvm-i386 pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   54ec59f6b0..359970fd8b  359970fd8b781fac2ddcbc84dd5b890075fa08ef -> smoke


Re: [Xen-devel] [PATCH v3 17/25] xen/arm: introduce allocate_memory

2018-10-03 Thread Stefano Stabellini
On Wed, 1 Aug 2018, Julien Grall wrote:
> Hi,
> 
> On 01/08/18 00:28, Stefano Stabellini wrote:
> > Introduce an allocate_memory function able to allocate memory for DomUs
> > and map it at the right guest addresses, according to the guest memory
> > map: GUEST_RAM0_BASE and GUEST_RAM1_BASE.
> > 
> > Signed-off-by: Stefano Stabellini 
> > ---
> > Changes in v3:
> > - new patch
> > ---
> >   xen/arch/arm/domain_build.c | 125
> > +++-
> >   1 file changed, 124 insertions(+), 1 deletion(-)
> > 
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index ab72c36..dfa74e4 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -369,6 +369,129 @@ static void __init allocate_memory_11(struct domain
> > *d,
> >   }
> >   }
> >   +static bool __init insert_bank(struct domain *d,
> > +   struct kernel_info *kinfo,
> > +   struct page_info *pg,
> > +   unsigned int order)
> > +{
> > +int res, i;
> > +mfn_t smfn;
> > +paddr_t gaddr, size;
> > +struct membank *bank;
> > +
> > +smfn = page_to_mfn(pg);
> 
> This could combine with the declaration above.

OK


> > +size = pfn_to_paddr(1UL << order);
> 
> Ditto.

OK


> > +
> > +/*
> > + * DomU memory is provided in two banks:
> > + *   GUEST_RAM0_BASE - GUEST_RAM0_BASE + GUEST_RAM0_SIZE
> > + *   GUEST_RAM1_BASE - GUEST_RAM1_BASE + GUEST_RAM1_SIZE
> > + *
> > + * Find the right gaddr address for DomUs accordingly.
> > + */
> > +gaddr = GUEST_RAM0_BASE;
> > +if ( kinfo->mem.nr_banks > 0 )
> > +{
> > +for( i = 0; i < kinfo->mem.nr_banks; i++ )
> > +{
> > +bank = &kinfo->mem.bank[i];
> > +gaddr = bank->start + bank->size;
> > +}
> > +if ( bank->start == GUEST_RAM0_BASE &&
> > + gaddr + size > (GUEST_RAM0_BASE + GUEST_RAM0_SIZE) )
> > +gaddr = GUEST_RAM1_BASE;
> > +if ( bank->start == GUEST_RAM1_BASE &&
> > + gaddr + size > (GUEST_RAM1_BASE + GUEST_RAM1_SIZE) )
> > +goto fail;
> > +}
> 
> I still really dislike this code. This is difficult to understand and not
> scalable. As I said in the previous version, it would be possible to have more
> than 2 banks in the future. This will either come with PCI PT or dynamic
> memory layout.
> 
> What should really be done is a function allocate_memory that takes the
> range to allocate as a parameter, e.g.:
> 
> allocate_bank_memory(struct domain *d, gfn_t sgfn, unsigned long order);
> 
> Then the function allocate_memory will compute the size of each bank based on
> mem_ and call allocate_bank_memory for each bank.

I'll make the change.


> > +
> > +dprintk(XENLOG_INFO,
> > +"Allocated %#"PRIpaddr"-%#"PRIpaddr":%#"PRIpaddr"-%#"PRIpaddr"
> > (%ldMB/%ldMB, order %d)\n",
> 
> It would be possible to request a guest with 16KB of memory. This would be
> printed as 0.

I'll printk KBs instead of MBs.


> > +mfn_to_maddr(smfn), mfn_to_maddr(smfn) + size,
> > +gaddr, gaddr + size,
> > +1UL << (order + PAGE_SHIFT - 20),
> > +/* Don't want format this as PRIpaddr (16 digit hex) */
> > +(unsigned long)(kinfo->unassigned_mem >> 20),
> > +order);
> > +
> > +res = guest_physmap_add_page(d, gaddr_to_gfn(gaddr), smfn, order);
> > +if ( res )
> > +{
> > +dprintk(XENLOG_ERR, "Failed map pages to DOMU: %d", res);
> > +goto fail;
> > +}
> > +
> > +kinfo->unassigned_mem -= size;
> > +bank = &kinfo->mem.bank[kinfo->mem.nr_banks];
> > +
> > +bank->start = gaddr;
> > +bank->size = size;
> > +kinfo->mem.nr_banks++;
> > +return true;
> > +
> > +fail:
> > +free_domheap_pages(pg, order);
> > +return false;
> > +}
> > +
> > +static void __init allocate_memory(struct domain *d, struct kernel_info
> > *kinfo)
> > +{
> > +const unsigned int min_order = get_order_from_bytes(MB(4));
> 
> Why do you have this limitation for non-direct mapped domains? There is
> nothing wrong with allocating 2MB/4K pages for them.

I'll remove


> > +struct page_info *pg;
> > +unsigned int order = get_allocation_size(kinfo->unassigned_mem);
> > +int i;
> > +
> > +dprintk(XENLOG_INFO, "Allocating mappings totalling %ldMB for
> > dom%d:\n",
> 
> Ditto.

I'll print KBs


> > +/* Don't want format this as PRIpaddr (16 digit hex) */
> > +(unsigned long)(kinfo->unassigned_mem >> 20), d->domain_id);
> > +
> > +kinfo->mem.nr_banks = 0;
> > +
> > +order = get_allocation_size(kinfo->unassigned_mem);
> > +if ( order > GUEST_RAM0_SIZE )
> > +order = GUEST_RAM0_SIZE;
> 
> I don't understand this check. You are comparing a power of 2 with KB.

I'll fix


> > +while ( kinfo->unassigned_mem )
> > +{
> > +pg = alloc_domheap

Re: [Xen-devel] [PATCH RFC] mm/memory_hotplug: Introduce memory block types

2018-10-03 Thread David Hildenbrand
On 03/10/2018 16:34, Vitaly Kuznetsov wrote:
> Dave Hansen  writes:
> 
>> On 10/03/2018 06:52 AM, Vitaly Kuznetsov wrote:
>>> It is more than just memmaps (e.g. forking udev process doing memory
>>> onlining also needs memory) but yes, the main idea is to make the
>>> onlining synchronous with hotplug.
>>
>> That's a good theoretical concern.
>>
>> But, is it a problem we need to solve in practice?
> 
> Yes, unfortunately. It was previously discovered that when we try to
> hotplug tons of memory to a low memory system (this is a common scenario
> with VMs) we end up with OOM because for all new memory blocks we need
> to allocate page tables, struct pages, ... and we need memory to do
> that. The userspace program doing memory onlining also needs memory to
> run and in case it prefers to fork to handle hundreds of notifications
> ... well, it may get OOMkilled before it manages to online anything.
> 
> Allocating all kernel objects from the newly hotplugged blocks would
> definitely help to manage the situation but as I said this won't solve
> the 'forking udev' problem completely (it will likely remain in
> 'extreme' cases only. We can probably work around it by onlining with a
> dedicated process which doesn't do memory allocation).
> 

I guess the problem is even worse. We always have two phases

1. add memory - requires memory allocation
2. online memory - might require memory allocations e.g. for slab/slub

So if we just added memory but don't have sufficient memory to start a
user space process to trigger onlining, then we most likely also don't
have sufficient memory to online the memory right away (in some scenarios).

We would have to allocate all new memory for 1 and 2 from the memory to
be onlined. I guess the latter part is less trivial.

So while onlining the memory from the kernel might make things a little
more robust, we would still have the chance for OOM / onlining failing.

-- 

Thanks,

David / dhildenb


Re: [Xen-devel] [PATCH RFC] mm/memory_hotplug: Introduce memory block types

2018-10-03 Thread David Hildenbrand
On 03/10/2018 16:24, Michal Hocko wrote:
> On Wed 03-10-18 15:52:24, Vitaly Kuznetsov wrote:
> [...]
>>> As David said some of the memory cannot be onlined without further steps
>>> (e.g. when it is standby as David called it) and then I fail to see how
>>> eBPF help in any way.
>>
>> and also, we can fight till the end of days here trying to come up with
>> an onlining solution which would work for everyone and eBPF would move
>> this decision to distro level.
> 
> The point is that there is _no_ general onlining solution. This is
> basically policy which belongs to the userspace.
> 

As already stated, I guess we should then provide user space with
sufficient information to make a good decision (to implement rules).

The eBPF is basically the same idea, only the rules are formulated
differently and directly handled in the kernel. Still it might be e.g.
relevant if memory is standby memory (that's what I remember as the
official s390x name), or something else.

Right now, the (udev) rules we have make assumptions based on general
system properties (s390x, HyperV ...).

-- 

Thanks,

David / dhildenb


Re: [Xen-devel] [PATCH RFC] mm/memory_hotplug: Introduce memory block types

2018-10-03 Thread David Hildenbrand
On 03/10/2018 15:54, Michal Hocko wrote:
> On Tue 02-10-18 17:25:19, David Hildenbrand wrote:
>> On 02/10/2018 15:47, Michal Hocko wrote:
> [...]
>>> Zone imbalance is an inherent problem of the highmem zone. It is
>>> essentially the highmem zone we all loved so much back in 32b days.
>>> Yes the movable zone doesn't have any addressing limitations so it is a
>>> bit more relaxed but considering the hotplug scenarios I have seen so
>>> far people just want to have full NUMA nodes movable to allow replacing
>>> DIMMs. And then we are back to square one and the zone imbalance issue.
>>> You have those regardless where memmaps are allocated from.
>>
>> Unfortunately yes. And things get more complicated as you are adding a
>> whole DIMMs and get notifications in the granularity of memory blocks.
>> Usually you are not interested in onlining any memory block of that DIMM
>> as MOVABLE as soon as you would have to online one memory block of that
>> DIMM as NORMAL - because that can already block the whole DIMM.
> 
> For the purpose of the hotremove, yes. But as Dave has noted people are
> (ab)using zone movable for other purposes - e.g. large pages.

That might be right for some very special use cases. For most of users
this is not the case (meaning it should be the default but if the user
wants to change it, he should be allowed to change it).

>  
> [...]
>>> Then the immediate question would be why to use memory hotplug for that
>>> at all? Why don't you simply start with a huge pre-allocated physical
>>> address space and balloon memory in an out per demand. Why do you want
>>> to inject new memory during the runtime?
>>
>> Let's assume you have a guest with 20GB size and eventually want to
>> allow to grow it to 4TB. You would have to allocate metadata for 4TB
>> right from the beginning. That's definitely not what we want. That is
>> why memory hotplug is used by e.g. XEN or Hyper-V. With Hyper-V, the
>> hypervisor even tells you at which places additional memory has been
>> made available.
> 
> Then you have to live with the fact that your hot added memory will be
> self hosted and find a way for ballooning to work with that. The price
> would be that some part of the memory is not really balloonable in the
> end.
> 
 1. is a reason why distributions usually don't configure
 "MEMORY_HOTPLUG_DEFAULT_ONLINE", because you really want the option for
 MOVABLE zone. That however implies, that e.g. for x86, you have to
 handle all new memory in user space, especially also HyperV memory.
 There, you then have to check for things like "isHyperV()" to decide
 "oh, yes, this should definitely not go to the MOVABLE zone".
>>>
>>> Why do you need a generic hotplug rule in the first place? Why don't you
>>> simply provide different set of rules for different usecases? Let users
>>> decide which usecase they prefer rather than try to be clever which
>>> almost always hits weird corner cases.
>>>
>>
>> Memory hotplug has to work as reliable as we can out of the box. Letting
>> the user make simple decisions like "oh, I am on hyper-V, I want to
>> online memory to the normal zone" does not feel right.
> 
> Users usually know what their usecase is, and then it is just a matter of
> plumbing (e.g. distributions can provide proper tools to deploy those
> usecases) to choose the right (and, for the user, obscure) way to make it work.

I disagree. If we can ship sane defaults, we should do that and allow to
make changes later on. This is how distributions have been working for
ever. But yes, allowing to make modifications is always a good idea to
tailor it to some special case user scenarios. (tuned or whatever we
have in place).

> 
>> But yes, we
>> should definitely allow to make modifications. So some sane default rule
>> + possible modification is usually a good idea.
>>
>> I think Dave has a point with using MOVABLE for huge page use cases. And
>> there might be other corner cases as you correctly state.
>>
>> I wonder if this patch itself minus modifying online/offline might make
>> sense. We can then implement simple rules in user space
>>
>> if (normal) {
>>  /* customers expect hotplugged DIMMs to be unpluggable */
>>  online_movable();
>> } else if (paravirt) {
>>  /* paravirt memory should as default always go to the NORMAL */
>>  online();
>> } else {
>>  /* standby memory will never get onlined automatically */
>> }
>>
>> Compared to having to guess what is to be done (isKVM(), isHyperV,
>> isS390 ...) and failing once this is no longer unique (e.g. virtio-mem
>> and ACPI support for x86 KVM).
> 
> I am worried that exporing a type will just push us even further to the
> corner. The current design is really simple and 2 stage and that is good
> because it allows for very different usecases. The more specific the API
> be the more likely we are going to hit "I haven't even dreamed somebody
> would be using hotplug for this thing". And I would bet this will happen
> sooner or later

Re: [Xen-devel] Ping: [PATCH] x86: improve vCPU selection in pagetable_dying()

2018-10-03 Thread George Dunlap
On 09/26/2018 08:04 AM, Jan Beulich wrote:
 On 25.09.18 at 18:22,  wrote:
>> On 18/09/18 13:44, Jan Beulich wrote:
>> On 10.09.18 at 16:02,  wrote:
 Rather than unconditionally using vCPU 0, use the current vCPU if the
 subject domain is the current one.

 Signed-off-by: Jan Beulich 
>>
>> What improvement is this intended to bring?
> 
> I've come across this quite a while ago when investigating possibly
> dangerous uses of d->vcpu[], well before your series to improve the
> situation there. I generally consider it wrong to hard code use of
> d->vcpu[0] whenever it can be avoided.
> 
>> Shadows are per-domain, and the gmfn in question is passed in by the
>> caller.  AFAICT, it is a logical bug that the callback takes a vcpu
>> rather than a domain in the first place.
> 
> Did you look at the 3-level variant of sh_pagetable_dying()? It very
> clearly reads the given vCPU's CR3.

Yes; and so the current implementation which unconditionally passes vcpu
0 is clearly a bug.

> Looking at things again (in particular
> the comment ahead of pagetable_dying()) I now actually wonder why
> HVMOP_pagetable_dying is permitted to be called by other than a domain
> for itself. There's no use of it in the tool stack. Disallowing the unused
> case would mean the fast-path logic in sh_pagetable_dying() could
> become the only valid/implemented case. Tim?

Not so -- a guest could still call pagetable_dying() on the top level PT
of a process not currently running.

I would be totally in favor of limiting this call to the guest itself,
however -- that would simplify the logic even more.

  -George
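For readers following along, the selection rule the patch implements is tiny; modelled in plain C with hypothetical stand-in types (not Xen's real struct domain/struct vcpu), it reads:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for Xen's domain/vCPU types. */
struct domain;
struct vcpu   { struct domain *domain; };
struct domain { struct vcpu *vcpu[4]; };

/*
 * Rather than unconditionally using vCPU 0, use the current vCPU
 * when the subject domain is the current one.
 */
static struct vcpu *select_vcpu(struct domain *d, struct vcpu *curr)
{
    return (curr && curr->domain == d) ? curr : d->vcpu[0];
}
```

The point of the change: the 3-level sh_pagetable_dying() reads the given vCPU's CR3, so always passing vCPU 0 reads the wrong guest state whenever the caller is running on another vCPU of the same domain.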

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v3 04/25] xen/arm: document dom0less

2018-10-03 Thread Stefano Stabellini
On Wed, 1 Aug 2018, Julien Grall wrote:
> Hi Stefano,
> 
> On 01/08/18 00:27, Stefano Stabellini wrote:
> > Add a new document to provide information on how to use dom0less related
> > features and their current limitations.
> > 
> > Signed-off-by: Stefano Stabellini 
> > 
> > ---
> > Changes in v3:
> > - add patch
> > ---
> >   docs/misc/arm/dom0less | 47
> > +++
> >   1 file changed, 47 insertions(+)
> >   create mode 100644 docs/misc/arm/dom0less
> > 
> > diff --git a/docs/misc/arm/dom0less b/docs/misc/arm/dom0less
> 
> This should be suffixed with .txt. You also want to add a line in docs/INDEX
> describing the file.

I'll make these changes and all the others suggested in this email.


> > new file mode 100644
> > index 000..ae5a8b1
> > --- /dev/null
> > +++ b/docs/misc/arm/dom0less
> > @@ -0,0 +1,47 @@
> > +Dom0less
> > +
> > +
> > +"Dom0less" is a set of Xen features that enable the deployment of a Xen
> > +system without Dom0.
> 
> I think this sentence is misleading. You still deploy Xen with Dom0.
> 
> Also, we have been trying to remove the wording Dom0 anywhere in the code.
> Instead, we are now using "Hardware Domain". I would rather avoid using Dom0
> in the documentation as it could be misleading; you will always have a domain
> with ID 0 (it may not be what you call Dom0 here).
> 
> > Each feature can be used independently from the
> > +others, unless otherwise stated.
> > +
> > +Booting Multiple Domains from Device Tree
> > +=
> > +
> > +This feature enables Xen to create a set of DomUs alongside Dom0 at boot
> > +time. Information about the DomUs to be created by Xen is passed to the
> > +hypervisor via Device Tree. Specifically, the existing Device Tree based
> > +Multiboot specification has been extended to allow for multiple domains
> > +to be passed to Xen. See docs/misc/arm/device-tree/booting.txt for more
> > +information about the Multiboot specification and how to use it.
> > +
> > +Instead of waiting for Dom0 to be fully booted and the Xen tools to
> > +become available, domains created by Xen this way are started in
> > +parallel to Dom0. Hence, their boot time is typically much shorter.
> > +
> > +Domains started by Xen at boot time currently have the following
> > +limitations:
> > +
> > +- they cannot be properly shutdown or rebooted using xl
> > +If one of them crashes, the whole platform should be rebooted.
> > +
> > +- some xl operations might not work as expected
> > +xl is meant to be used with domains that have been created by it. Using
> > +xl with domains started by Xen at boot might not work as expected.
> > +
> > +- the GIC version is the native version
> > +In absence of other information, the GIC version exposed to the domains
> > +started by Xen at boot is the same as the native GIC version.
> > +
> > +- no PV drivers
> > +There is no support for PV devices at the moment. All devices need to be
> > +statically assigned to guests.
> > +
> > +- vCPU pinning
> > +Pinning vCPUs of domains started by Xen at boot can be done from dom0,
> > +using `xl vcpu-pin' as usual. It is not currently possible to configure
> > +vCPU pinning for domains other than dom0 without dom0. However, the NULL
> > +scheduler (currently unsupported) can be selected by passing
> 
> I would rather not mention NULL scheduler is unsupported here. That's another
> place to update the doc when it gets supported and maybe be missed.
> 
> > +`sched=null' to the Xen command line. The NULL scheduler automatically
> > +assignes and pins vCPUs to pCPUs, but the vCPU-pCPU assignments cannot
> 
> s/assignes/assigns/
> 
> > +be configured.
> 
> Cheers,
> 
> -- 
> Julien Grall
> 
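For readers without the referenced file at hand: docs/misc/arm/device-tree/booting.txt describes each boot-time domain as a `xen,domain` node under `/chosen`. A rough sketch of what the document is describing (node names, addresses, sizes and bootargs below are illustrative placeholders, not values from the patch):

```dts
chosen {
    domU1 {
        compatible = "xen,domain";
        #address-cells = <1>;
        #size-cells = <1>;
        memory = <0 0x20000>;        /* amount of RAM, in KB (placeholder) */
        cpus = <1>;                  /* number of vCPUs */

        module@4a000000 {
            compatible = "multiboot,kernel", "multiboot,module";
            reg = <0x4a000000 0xa00000>;   /* kernel load address/size */
            bootargs = "console=ttyAMA0";
        };
    };
};
```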


Re: [Xen-devel] [PATCH V3 2/2] Xen/PCIback: Implement PCI flr/slot/bus reset with 'reset' SysFS attribute

2018-10-03 Thread Pasi Kärkkäinen
On Wed, Sep 19, 2018 at 11:05:26AM +0200, Roger Pau Monné wrote:
> On Tue, Sep 18, 2018 at 02:09:53PM -0400, Boris Ostrovsky wrote:
> > On 9/18/18 5:32 AM, George Dunlap wrote:
> > >
> > >> On Sep 18, 2018, at 8:15 AM, Pasi Kärkkäinen  wrote:
> > >>
> > >> Hi,
> > >>
> > >> On Mon, Sep 17, 2018 at 02:06:02PM -0400, Boris Ostrovsky wrote:
> > >>> What about the toolstack changes? Have they been accepted? I vaguely
> > >>> recall there was a discussion about those changes but don't remember how
> > >>> it ended.
> > >>>
> > >> I don't think toolstack/libxl patch has been applied yet either.
> > >>
> > >>
> > >> "[PATCH V1 0/1] Xen/Tools: PCI reset using 'reset' SysFS attribute":
> > >> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00664.html
> > >>
> > >> "[PATCH V1 1/1] Xen/libxl: Perform PCI reset using 'reset' SysFS 
> > >> attribute":
> > >> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00663.html
> > 
> > 
> > Will this patch work for *BSD? Roger?
> 
> At least FreeBSD doesn't support PCI passthrough, so none of this works
> ATM. There's no sysfs on BSD, so much of what's in libxl_pci.c will
> have to be moved to libxl_linux.c when BSD support is added.
> 

Ok. That sounds like it's OK for the initial pci 'reset' implementation in
xl/libxl to be Linux-only.


Thanks,

-- Pasi


> Thanks, Roger.
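For reference, the kernel-side interface the series builds on is the per-device `reset` attribute in sysfs; writing `1` asks the kernel to perform an FLR (or a slot/bus reset fallback). A hedged sketch of the Linux-only plumbing (helper names are mine, not libxl's, and error handling is trimmed):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build the sysfs reset path for a PCI device given its BDF string,
 * e.g. "0000:00:19.0" -> "/sys/bus/pci/devices/0000:00:19.0/reset". */
static int pci_reset_path(const char *bdf, char *buf, size_t len)
{
    int n = snprintf(buf, len, "/sys/bus/pci/devices/%s/reset", bdf);
    return (n > 0 && (size_t)n < len) ? 0 : -1;
}

/* Trigger the reset; returns 0 on success, -1 if unsupported/denied. */
static int pci_sysfs_reset(const char *bdf)
{
    char path[128];
    FILE *f;

    if (pci_reset_path(bdf, path, sizeof(path)))
        return -1;
    f = fopen(path, "w");          /* attribute absent: no reset method */
    if (!f)
        return -1;
    fputs("1\n", f);               /* kernel performs the actual reset */
    return fclose(f) ? -1 : 0;
}
```

The attribute only exists when the kernel knows a working reset method for the device, which is why a failed open is treated as "unsupported". This path layout is Linux-specific, which is the reason given above for keeping the code out of the BSD side of libxl.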



Re: [Xen-devel] [PATCH] flask: Add check for io{port, mem}con sorting

2018-10-03 Thread Jan Beulich
>>> "DeGraaf, Daniel G"  10/02/18 7:39 PM >>>
>> From: Jan Beulich 
>> >>> On 28.09.18 at 21:13,  wrote:
>> > These entries are not always sorted by checkpolicy.  Enforce the sorting
>> > (which can be done manually if using an unpatched checkpolicy) when
>> > loading the policy so that later uses by the security server do not
>> > incorrectly use the initial sid.
>> 
>> "Enforce the sorting" could mean two things - sorting what's unsorted,
>> or (as you do) raise an error. Isn't raising an error here possibly going
>> to impact systems which currently work?
>
>A system whose iomemcon entries are unsorted is currently not enforcing the
>intended security policy.  It normally ends up enforcing a more restrictive 
>policy,
>but not always (it depends on what you allow access to the default label). My
>guess is that anyone impacted by this problem would have noticed when they
>added the rule and it had no effect. However, I do agree this could cause an
>error on currently-working systems that do things like add iomemcon entries
>that they don't use.
>
>Are you suggesting an update to the commit message to make this breakage
>clear, or does the problem need to be fixed in the hypervisor? It would be
>possible to sort the entries as they're added, but that's not as easy as just
>detecting the mis-sort (since they're a linked list), and the policy creation
>process should have already sorted them (except that that part was missing).

I think resolving the ambiguity in the description is the minimal adjustment.
If that's what you want to go with (you're the maintainer after all), I think
it would suffice to suggest revised wording (or even merely your agreement for
the committer to make a respective adjustment), without necessarily
re-submitting the patch. Personally (but again, I'm not the maintainer of this
code) I think it would be better if the actual issue was addressed by doing
the sorting. It could be done with a warning logged, and perhaps with the
warning suggesting that the built-in sorting will/might go away again in a
later release.

Jan
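Sorting at load time, as suggested, would amount to a sorted insert into the singly linked range list; a sketch with hypothetical types (the real Flask iomemcon structures differ):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for an iomemcon range entry. */
struct range_node {
    uint64_t low, high;
    struct range_node *next;
};

/* Insert keeping the list sorted by low address.  Returns the
 * (possibly new) list head. */
static struct range_node *sorted_insert(struct range_node *head,
                                        struct range_node *n)
{
    struct range_node **p = &head;

    while (*p && (*p)->low < n->low)
        p = &(*p)->next;
    n->next = *p;
    *p = n;
    return head;
}
```

Keeping the list ordered by start address means a lookup can stop at the first range past the target, so a mis-sorted policy can no longer make later entries unreachable and silently fall back to the initial SID.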




Re: [Xen-devel] [PATCH RFC] mm/memory_hotplug: Introduce memory block types

2018-10-03 Thread Vitaly Kuznetsov
Dave Hansen  writes:

> On 10/03/2018 06:52 AM, Vitaly Kuznetsov wrote:
>> It is more than just memmaps (e.g. forking udev process doing memory
>> onlining also needs memory) but yes, the main idea is to make the
>> onlining synchronous with hotplug.
>
> That's a good theoretical concern.
>
> But, is it a problem we need to solve in practice?

Yes, unfortunately. It was previously discovered that when we try to
hotplug tons of memory to a low memory system (this is a common scenario
with VMs) we end up with OOM because for all new memory blocks we need
to allocate page tables, struct pages, ... and we need memory to do
that. The userspace program doing memory onlining also needs memory to
run, and in case it prefers to fork to handle hundreds of notifications
... well, it may get OOM-killed before it manages to online anything.

Allocating all kernel objects from the newly hotplugged blocks would
definitely help to manage the situation but as I said this won't solve
the 'forking udev' problem completely (it will likely remain in
'extreme' cases only. We can probably work around it by onlining with a
dedicated process which doesn't do memory allocation).

-- 
Vitaly


[Xen-devel] [qemu-mainline baseline-only test] 75343: trouble: blocked/broken

2018-10-03 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 75343 qemu-mainline real [real]
http://osstest.xensource.com/osstest/logs/75343/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64  broken
 build-i386   broken
 build-armhf-pvopsbroken
 build-i386-xsm   broken
 build-amd64-xsm  broken
 build-amd64-pvopsbroken
 build-i386-pvops broken
 build-armhf  broken

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-shadow1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-midway1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-win10-i386  1 build-check(1)  blocked n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-pvshim 1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win10-i386  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked 
n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-xsm1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvshim1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1)blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-raw1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-shadow 1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a

Re: [Xen-devel] [PATCH RFC] mm/memory_hotplug: Introduce memory block types

2018-10-03 Thread Michal Hocko
On Wed 03-10-18 15:52:24, Vitaly Kuznetsov wrote:
[...]
> > As David said some of the memory cannot be onlined without further steps
> > (e.g. when it is standby as David called it) and then I fail to see how
> > eBPF help in any way.
> 
> and also, we can fight till the end of days here trying to come up with
> an onlining solution which would work for everyone and eBPF would move
> this decision to distro level.

The point is that there is _no_ general onlining solution. This is
basically policy which belongs to the userspace.
-- 
Michal Hocko
SUSE Labs


Re: [Xen-devel] [PATCH RFC] mm/memory_hotplug: Introduce memory block types

2018-10-03 Thread Dave Hansen
On 10/03/2018 06:52 AM, Vitaly Kuznetsov wrote:
> It is more than just memmaps (e.g. forking udev process doing memory
> onlining also needs memory) but yes, the main idea is to make the
> onlining synchronous with hotplug.

That's a good theoretical concern.

But, is it a problem we need to solve in practice?



Re: [Xen-devel] [PATCH RFC] mm/memory_hotplug: Introduce memory block types

2018-10-03 Thread Michal Hocko
On Tue 02-10-18 17:25:19, David Hildenbrand wrote:
> On 02/10/2018 15:47, Michal Hocko wrote:
[...]
> > Zone imbalance is an inherent problem of the highmem zone. It is
> > essentially the highmem zone we all loved so much back in 32b days.
> > Yes the movable zone doesn't have any addressing limitations so it is a
> > bit more relaxed but considering the hotplug scenarios I have seen so
> > far people just want to have full NUMA nodes movable to allow replacing
> > DIMMs. And then we are back to square one and the zone imbalance issue.
> > You have those regardless where memmaps are allocated from.
> 
> Unfortunately yes. And things get more complicated as you are adding a
> whole DIMMs and get notifications in the granularity of memory blocks.
> Usually you are not interested in onlining any memory block of that DIMM
> as MOVABLE as soon as you would have to online one memory block of that
> DIMM as NORMAL - because that can already block the whole DIMM.

For the purpose of the hotremove, yes. But as Dave has noted people are
(ab)using zone movable for other purposes - e.g. large pages.
 
[...]
> > Then the immediate question would be why to use memory hotplug for that
> > at all? Why don't you simply start with a huge pre-allocated physical
> > address space and balloon memory in an out per demand. Why do you want
> > to inject new memory during the runtime?
> 
> Let's assume you have a guest with 20GB size and eventually want to
> allow to grow it to 4TB. You would have to allocate metadata for 4TB
right from the beginning. That's definitely not what we want. That is
> why memory hotplug is used by e.g. XEN or Hyper-V. With Hyper-V, the
> hypervisor even tells you at which places additional memory has been
> made available.

Then you have to live with the fact that your hot added memory will be
self hosted and find a way for ballooning to work with that. The price
would be that some part of the memory is not really balloonable in the
end.

> >> 1. is a reason why distributions usually don't configure
> >> "MEMORY_HOTPLUG_DEFAULT_ONLINE", because you really want the option for
> >> MOVABLE zone. That however implies, that e.g. for x86, you have to
> >> handle all new memory in user space, especially also HyperV memory.
> >> There, you then have to check for things like "isHyperV()" to decide
> >> "oh, yes, this should definitely not go to the MOVABLE zone".
> > 
> > Why do you need a generic hotplug rule in the first place? Why don't you
> > simply provide different set of rules for different usecases? Let users
> > decide which usecase they prefer rather than try to be clever which
> > almost always hits weird corner cases.
> > 
> 
> Memory hotplug has to work as reliable as we can out of the box. Letting
> the user make simple decisions like "oh, I am on hyper-V, I want to
> online memory to the normal zone" does not feel right.

Users usually know what their usecase is, and then it is just a matter
of plumbing (e.g. the distribution can provide proper tools to deploy
those usecases) to choose the right, if to the user obscure, way to make
it work.

> But yes, we
> should definitely allow to make modifications. So some sane default rule
> + possible modification is usually a good idea.
> 
> I think Dave has a point with using MOVABLE for huge page use cases. And
> there might be other corner cases as you correctly state.
> 
> I wonder if this patch itself minus modifying online/offline might make
> sense. We can then implement simple rules in user space
> 
> if (normal) {
>   /* customers expect hotplugged DIMMs to be unpluggable */
>   online_movable();
> } else if (paravirt) {
>   /* paravirt memory should as default always go to the NORMAL */
>   online();
> } else {
>   /* standby memory will never get onlined automatically */
> }
> 
> Compared to having to guess what is to be done (isKVM(), isHyperV,
> isS390 ...) and failing once this is no longer unique (e.g. virtio-mem
> and ACPI support for x86 KVM).

I am worried that exporting a type will just push us even further into
the corner. The current design is really simple and two-stage, and that
is good because it allows for very different usecases. The more specific
the API is, the more likely we are going to hit "I haven't even dreamed
somebody would be using hotplug for this thing". And I would bet this
will happen sooner or later.

Just look at how the whole auto-onlining screwed the API to work around
an implementation detail. It has created a one-purpose behavior that
doesn't suit many usecases. Yet we have to live with that because
somebody really relies on it. Let's not repeat the same errors.
-- 
Michal Hocko
SUSE Labs
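Whatever rule set wins, the plumbing David's pseudocode implies is just a string written to each memory block's sysfs `state` file. A userspace sketch of that dispatch (the `normal`/`paravirt`/`standby` names come from the RFC's proposed types, not an existing kernel ABI):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Map a proposed memory-block type to the value user space would
 * write to /sys/devices/system/memory/memoryN/state.  NULL means
 * "leave offline" (e.g. s390 standby memory). */
static const char *online_action(const char *type)
{
    if (strcmp(type, "normal") == 0)
        return "online_movable";   /* hotplugged DIMMs stay unpluggable */
    if (strcmp(type, "paravirt") == 0)
        return "online";           /* default to the NORMAL zone */
    return NULL;                   /* standby: never onlined automatically */
}
```

The chosen string would then be written to the block's `state` file by a udev rule or an onlining daemon, which keeps the policy entirely in userspace, as argued above.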


[Xen-devel] [libvirt test] 128331: tolerable all pass - PUSHED

2018-10-03 Thread osstest service owner
flight 128331 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/128331/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 128304
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 128304
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-qcow2 12 migrate-support-checkfail never pass
 test-arm64-arm64-libvirt-qcow2 13 saverestore-support-checkfail never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass

version targeted for testing:
 libvirt  d6b8838dd83697f721fe0706068df765148154de
baseline version:
 libvirt  9f81dc1081bdf02b001083bbda7257bf24d3e604

Last test of basis   128304  2018-10-02 04:18:45 Z1 days
Testing same since   128331  2018-10-03 04:18:43 Z0 days1 attempts


People who touched revisions under test:
  Ján Tomko 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-arm64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-arm64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-libvirt-xsm pass
 test-arm64-arm64-libvirt-xsm pass
 test-amd64-i386-libvirt-xsm  pass
 test-amd64-amd64-libvirt pass
 test-arm64-arm64-libvirt pass
 test-armhf-armhf-libvirt pass
 test-amd64-i386-libvirt  pass
 test-amd64-amd64-libvirt-pairpass
 test-amd64-i386-libvirt-pair pass
 test-arm64-arm64-libvirt-qcow2   pass
 test-armhf-armhf-libvirt-raw pass
 test-amd64-amd64-libvirt-vhd pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   9f81dc1081..d6b8838dd8  d6b8838dd83697f721fe0706068df765148154de -> 
xen-tested-master


Re: [Xen-devel] [PATCH RFC] mm/memory_hotplug: Introduce memory block types

2018-10-03 Thread Vitaly Kuznetsov
Michal Hocko  writes:

> On Wed 03-10-18 15:38:04, Vitaly Kuznetsov wrote:
>> David Hildenbrand  writes:
>> 
>> > On 02/10/2018 15:47, Michal Hocko wrote:
>> ...
>> >> 
>> >> Why do you need a generic hotplug rule in the first place? Why don't you
>> >> simply provide different set of rules for different usecases? Let users
>> >> decide which usecase they prefer rather than try to be clever which
>> >> almost always hits weird corner cases.
>> >> 
>> >
>> > Memory hotplug has to work as reliable as we can out of the box. Letting
>> > the user make simple decisions like "oh, I am on hyper-V, I want to
>> > online memory to the normal zone" does not feel right. But yes, we
>> > should definitely allow to make modifications.
>> 
>> Last time I was thinking about the imperfectness of the auto-online
>> solution we have and any other solution we're able to suggest an idea
>> came to my mind - what if we add an eBPF attach point to the
>> auto-onlining mechanism, effectively offloading decision-making to
>> userspace. We'll of course need to provide all required data (e.g. how
>> memory blocks are aligned with physical DIMMs as it makes no sense to
>> online part of DIMM as normal and the rest as movable as it's going to
>> be impossible to unplug such DIMM anyways).
>
> And how does that differ from the notification mechanism we have? Just
> by not relying on the process scheduling? If yes then this revolves
> around the implementation detail that you care about time-to-hot-add
> vs. time-to-online. And that is a solvable problem - just allocate
> memmaps from the hot-added memory.

It is more than just memmaps (e.g. forking udev process doing memory
onlining also needs memory) but yes, the main idea is to make the
onlining synchronous with hotplug.

>
> As David said some of the memory cannot be onlined without further steps
> (e.g. when it is standby as David called it) and then I fail to see how
> eBPF help in any way.

and also, we can fight till the end of days here trying to come up with
an onlining solution which would work for everyone and eBPF would move
this decision to distro level.

-- 
Vitaly
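For context, the "forking udev" notification path under discussion is usually driven by a one-line rule such as the following (rule file naming and defaults vary per distribution):

```
SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"
```

Every hot-added memory block fires one uevent, so hundreds of blocks mean hundreds of rule executions; that is where the OOM risk on small guests described above comes from.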


Re: [Xen-devel] [PATCH RFC] mm/memory_hotplug: Introduce memory block types

2018-10-03 Thread Michal Hocko
On Wed 03-10-18 15:38:04, Vitaly Kuznetsov wrote:
> David Hildenbrand  writes:
> 
> > On 02/10/2018 15:47, Michal Hocko wrote:
> ...
> >> 
> >> Why do you need a generic hotplug rule in the first place? Why don't you
> >> simply provide different set of rules for different usecases? Let users
> >> decide which usecase they prefer rather than try to be clever which
> >> almost always hits weird corner cases.
> >> 
> >
> > Memory hotplug has to work as reliable as we can out of the box. Letting
> > the user make simple decisions like "oh, I am on hyper-V, I want to
> > online memory to the normal zone" does not feel right. But yes, we
> > should definitely allow to make modifications.
> 
> Last time I was thinking about the imperfectness of the auto-online
> solution we have and any other solution we're able to suggest an idea
> came to my mind - what if we add an eBPF attach point to the
> auto-onlining mechanism, effectively offloading decision-making to
> userspace. We'll of course need to provide all required data (e.g. how
> memory blocks are aligned with physical DIMMs as it makes no sense to
> online part of DIMM as normal and the rest as movable as it's going to
> be impossible to unplug such DIMM anyways).

And how does that differ from the notification mechanism we have? Just
by not relying on the process scheduling? If yes then this revolves
around the implementation detail that you care about time-to-hot-add
vs. time-to-online. And that is a solvable problem - just allocate
memmaps from the hot-added memory.

As David said some of the memory cannot be onlined without further steps
(e.g. when it is standby as David called it) and then I fail to see how
eBPF help in any way.
-- 
Michal Hocko
SUSE Labs


[Xen-devel] [freebsd-master test] 128339: all pass - PUSHED

2018-10-03 Thread osstest service owner
flight 128339 freebsd-master real [real]
http://logs.test-lab.xenproject.org/osstest/logs/128339/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 freebsd  a16e14a2bb879c082d379f9ca2f201e993960b85
baseline version:
 freebsd  04d432fdc0c15f2da76dac4a9a5caf1aeb051ef0

Last test of basis   128277  2018-10-01 09:19:04 Z2 days
Testing same since   128339  2018-10-03 09:19:05 Z0 days1 attempts


People who touched revisions under test:
  0mp <0...@freebsd.org>
  ae 
  andreast 
  andrew 
  br 
  brooks 
  bz 
  emaste 
  gallatin 
  kbowling 
  ken 
  kevans 
  manu 
  markj 
  mckusick 
  mjg 
  rwatson 
  trasz 
  tuexen 

jobs:
 build-amd64-freebsd-againpass
 build-amd64-freebsd  pass
 build-amd64-xen-freebsd  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/freebsd.git
   04d432fdc0c..a16e14a2bb8  a16e14a2bb879c082d379f9ca2f201e993960b85 -> tested/master


[Xen-devel] [PATCH v4] memory_hotplug: Free pages as higher order

2018-10-03 Thread Arun KS
When free pages are released with a higher order, the time spent
coalescing pages in the buddy allocator can be reduced. With a
section size of 256MB, the hot add latency of a single section
improves from 50-60 ms to less than 1 ms, i.e. by roughly 60x.
Modify the external providers of the online callback to align with
the change. Also remove the prefetch from __free_pages_core().

Signed-off-by: Arun KS 
---
Changes since v3:
- renamed _free_pages_boot_core -> __free_pages_core.
- removed prefetch from __free_pages_core.
- removed xen_online_page().

Changes since v2:
- reuse code from __free_pages_boot_core().

Changes since v1:
- Removed prefetch().

Changes since RFC:
- Rebase.
- As suggested by Michal Hocko remove pages_per_block.
- Modified external providers of online_page_callback.

v3: https://lore.kernel.org/patchwork/patch/992348/
v2: https://lore.kernel.org/patchwork/patch/991363/
v1: https://lore.kernel.org/patchwork/patch/989445/
RFC: https://lore.kernel.org/patchwork/patch/984754/

---
 drivers/hv/hv_balloon.c|  6 --
 drivers/xen/balloon.c  | 23 ++
 include/linux/memory_hotplug.h |  2 +-
 mm/internal.h  |  1 +
 mm/memory_hotplug.c| 44 ++
 mm/page_alloc.c| 14 +-
 6 files changed, 58 insertions(+), 32 deletions(-)

diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
index b1b7880..c5bc0b5 100644
--- a/drivers/hv/hv_balloon.c
+++ b/drivers/hv/hv_balloon.c
@@ -771,7 +771,7 @@ static void hv_mem_hot_add(unsigned long start, unsigned long size,
}
 }
 
-static void hv_online_page(struct page *pg)
+static int hv_online_page(struct page *pg, unsigned int order)
 {
struct hv_hotadd_state *has;
unsigned long flags;
@@ -783,10 +783,12 @@ static void hv_online_page(struct page *pg)
if ((pfn < has->start_pfn) || (pfn >= has->end_pfn))
continue;
 
-   hv_page_online_one(has, pg);
+   hv_bring_pgs_online(has, pfn, (1UL << order));
break;
}
spin_unlock_irqrestore(&dm_device.ha_lock, flags);
+
+   return 0;
 }
 
 static int pfn_covered(unsigned long start_pfn, unsigned long pfn_cnt)
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index e12bb25..58ddf48 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -390,8 +390,8 @@ static enum bp_state reserve_additional_memory(void)
 
/*
 * add_memory_resource() will call online_pages() which in its turn
-* will call xen_online_page() callback causing deadlock if we don't
-* release balloon_mutex here. Unlocking here is safe because the
+* will call xen_bring_pgs_online() callback causing deadlock if we
+* don't release balloon_mutex here. Unlocking here is safe because the
 * callers drop the mutex before trying again.
 */
mutex_unlock(&balloon_mutex);
@@ -411,15 +411,22 @@ static enum bp_state reserve_additional_memory(void)
return BP_ECANCELED;
 }
 
-static void xen_online_page(struct page *page)
+static int xen_bring_pgs_online(struct page *pg, unsigned int order)
 {
-   __online_page_set_limits(page);
+   unsigned long i, size = (1 << order);
+   unsigned long start_pfn = page_to_pfn(pg);
+   struct page *p;
 
+   pr_debug("Online %lu pages starting at pfn 0x%lx\n", size, start_pfn);
mutex_lock(&balloon_mutex);
-
-   __balloon_append(page);
-
+   for (i = 0; i < size; i++) {
+   p = pfn_to_page(start_pfn + i);
+   __online_page_set_limits(p);
+   __balloon_append(p);
+   }
mutex_unlock(&balloon_mutex);
+
+   return 0;
 }
 
static int xen_memory_notifier(struct notifier_block *nb, unsigned long val, void *v)
@@ -744,7 +751,7 @@ static int __init balloon_init(void)
balloon_stats.max_retry_count = RETRY_UNLIMITED;
 
 #ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG
-   set_online_page_callback(&xen_online_page);
+   set_online_page_callback(&xen_bring_pgs_online);
register_memory_notifier(&xen_memory_nb);
register_sysctl_table(xen_root);
 
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 34a2822..7b04c1d 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -87,7 +87,7 @@ extern int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn,
unsigned long *valid_start, unsigned long *valid_end);
 extern void __offline_isolated_pages(unsigned long, unsigned long);
 
-typedef void (*online_page_callback_t)(struct page *page);
+typedef int (*online_page_callback_t)(struct page *page, unsigned int order);
 
 extern int set_online_page_callback(online_page_callback_t callback);
 extern int restore_online_page_callback(online_page_callback_t callback);
diff --git a/mm/internal.h b/mm/internal.h
index 87256ae..

Re: [Xen-devel] [PATCH] tools/pvh: set coherent MTRR state for all vCPUs

2018-10-03 Thread Wei Liu
On Tue, Oct 02, 2018 at 06:36:14PM +0200, Roger Pau Monne wrote:
> Instead of just doing it for the BSP. This requires storing the
> maximum number of possible vCPUs in xc_dom_image.
> 
> This has been a latent bug so far because PVH doesn't yet support
> pci-passthrough, so the effective memory cache attribute is forced to
> WB by the hypervisor. Note also that even without this in place vCPU#0
> is preferred in certain scenarios in order to calculate the memory
> cache attributes.
> 
> Reported-by: Andrew Cooper 
> Signed-off-by: Roger Pau Monné 
[...]
> ---
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index 8a8a32c699..5c80aab767 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -803,6 +803,7 @@ int libxl__build_pv(libxl__gc *gc, uint32_t domid,
>  dom->xenstore_evtchn = state->store_port;
>  dom->xenstore_domid = state->store_domid;
>  dom->claim_enabled = libxl_defbool_val(info->claim_mode);
> +dom->nr_vcpus = info->max_vcpus;

This isn't strictly needed, but I think setting it for PV as well is
more consistent.


Acked-by: Wei Liu 

Andrew, can you give this a try?

Wei.


Re: [Xen-devel] [PATCH RFC] mm/memory_hotplug: Introduce memory block types

2018-10-03 Thread Vitaly Kuznetsov
David Hildenbrand  writes:

> On 02/10/2018 15:47, Michal Hocko wrote:
...
>> 
>> Why do you need a generic hotplug rule in the first place? Why don't you
>> simply provide different set of rules for different usecases? Let users
>> decide which usecase they prefer rather than try to be clever which
>> almost always hits weird corner cases.
>> 
>
> Memory hotplug has to work as reliably as we can out of the box. Letting
> the user make simple decisions like "oh, I am on hyper-V, I want to
> online memory to the normal zone" does not feel right. But yes, we
> should definitely allow making modifications.

Last time I was thinking about the imperfectness of the auto-onlining
solution we have (and of any other solution we're able to suggest), an
idea came to my mind: what if we add an eBPF attach point to the
auto-onlining mechanism, effectively offloading the decision-making to
userspace? We'll of course need to provide all the required data (e.g.
how memory blocks are aligned with physical DIMMs, as it makes no sense
to online part of a DIMM as normal and the rest as movable, since it's
going to be impossible to unplug such a DIMM anyway).

-- 
Vitaly


Re: [Xen-devel] [PATCH 1/2] libxl: modify domain config when moving domain to another cpupool

2018-10-03 Thread George Dunlap
On Wed, Oct 3, 2018 at 12:45 PM George Dunlap  wrote:
>
> On Wed, Oct 3, 2018 at 12:29 PM Wei Liu  wrote:
> >
> > On Wed, Oct 03, 2018 at 12:02:24PM +0100, George Dunlap wrote:
> > > On Tue, Oct 2, 2018 at 3:20 PM Juergen Gross  wrote:
> > > >
> > > > Today the domain config info contains the cpupool name the domain was
> > > > started in only if the cpupool was specified at domain creation. Moving
> > > > the domain to another cpupool later won't change that information.
> > > >
> > > > Correct that by modifying the domain config accordingly.
> > > >
> > > > Signed-off-by: Juergen Gross 
> > >
> > > Would it be better to do this the same way the scheduling parameters
> > > was done -- by adding this to libxl_retrieve_domain_configuration()?
> > > That way the cpupool would show up in `xl list -l` as well (I think).
> >
> > This already modifies the saved state file, there will not be mismatch
> > between the saved state and the state in hypervisor. `xl list -l` should
> > work just fine.
>
> If you do it Juergen's way, `xl list -l` will show things you have
> *changed*, but not the defaults.  If you do it the way the scheduling
> parameters were done, the pool name will be shown even if no pool was
> specified in the config file and the VM was never migrated from the
> default pool to a different one.

But of course, we have the same problem that if the cpupool doesn't
exist on the far side, the migration will fail (I'm guessing?).  I
think this is surprising, and at the very least undocumented.  Is it
worth considering having it fall back to the default cpupool instead?

 -George


Re: [Xen-devel] [PATCH V6] x86/altp2m: Add a subop for obtaining the mem access of a page

2018-10-03 Thread Wei Liu
On Thu, Sep 27, 2018 at 10:58:54AM +0300, Razvan Cojocaru wrote:
> Currently there is a subop for setting the memaccess of a page, but not
> for consulting it.  The new HVMOP_altp2m_get_mem_access adds this
> functionality.
> 
> Both altp2m get/set mem access functions use the struct
> xen_hvm_altp2m_mem_access which has now dropped the `set' part and has
> been renamed from xen_hvm_altp2m_set_mem_access.
> 
> Signed-off-by: Adrian Pop 
> Signed-off-by: Razvan Cojocaru 
> 
> ---
> Changes since V5:
>  - Fixed the build by conditionally-compiling the altp2m code
>gated on CONFIG_HVM being #defined.
> ---
>  tools/libxc/include/xenctrl.h   |  3 +++
>  tools/libxc/xc_altp2m.c | 33 ++---

Acked-by: Wei Liu 


Re: [Xen-devel] [PATCH] libxl: Restore scheduling parameters after migrate in best-effort fashion

2018-10-03 Thread Dario Faggioli
On Wed, 2018-10-03 at 12:53 +0200, Juergen Gross wrote:
> On 02/10/2018 17:49, George Dunlap wrote:
> > Commit 3b4adba ("tools/libxl: include scheduler parameters in the
> > output of xl list -l") added scheduling parameters to the set of
> > information collected by libxl_retrieve_domain_configuration(), in
> > order to report that information in `xl list -l`.
> > 
> > Unfortunately, libxl_retrieve_domain_configuration() is also called
> > by
> > the migration / save code, and the results passed to the restore /
> > receive code.  This meant scheduler parameters were inadvertently
> > added to the migration stream, without proper consideration for how
> > to
> > handle corner cases.  The result was that if migrating from a host
> > running one scheduler to a host running a different scheduler, the
> > migration would fail with an error like the following:
> > 
> > libxl: error: libxl_sched.c:232:sched_credit_domain_set: Domain
> > 1:Getting domain sched credit: Invalid argument
> > libxl: error: libxl_create.c:1275:domcreate_rebuild_done: Domain
> > 1:cannot (re-)build domain: -3
> > 
> > Luckily there's a fairly straightforward way to set parameters in a
> > "best-effort" fashion.  libxl provides a single struct containing
> > the
> > parameters of all schedulers, as well as a parameter specifying
> > which
> > scheduler.  Parameters not used by a given scheduler are ignored.
> > Additionally, the struct contains a parameter to specify the
> > scheduler.  If you specify a specific scheduler,
> > libxl_domain_sched_params_set() will fail if there's a different
> > scheduler.  However, if you pass LIBXL_SCHEDULER_UNKNOWN, it will
> > use
> > the value of the current scheduler for that domain.
> > 
> > In domcreate_stream_done(), before calling libxl__build_post(), set
> > the scheduler to LIBXL_SCHEDULER_UNKNOWN.  This will propagate
> > scheduler parameters from the previous instantiation on a best-
> > effort
> > basis.
> > 
> > Signed-off-by: George Dunlap 
> 
> Acked-by: Juergen Gross 
> 
Reviewed-by: Dario Faggioli 

Dario
-- 
<> (Raistlin Majere)
-
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Software Engineer @ SUSE https://www.suse.com/



[Xen-devel] PV guests and APIC interaction

2018-10-03 Thread Andrew Cooper
Hello,

A bug has recently been discovered internally, where a 4.14 dom0 was
observed to be doing this:

(XEN) [   16.035377] emul-priv-op.c:1166:d0v0 Domain attempted WRMSR 001b from 0xfee00d00 to 0xfee00100
(XEN) [   16.035392] emul-priv-op.c:1166:d0v0 Domain attempted WRMSR 001b from 0xfee00d00 to 0xfee00900
...
(XEN) [   18.798336] emul-priv-op.c:1166:d0v1 Domain attempted WRMSR 001b from 0xfee00c00 to 0xfee0
(XEN) [   18.798350] emul-priv-op.c:1166:d0v1 Domain attempted WRMSR 001b from 0xfee00c00 to 0xfee00800

This is dom0 finding x2apic enabled in the APIC and trying to cycle it
around to xapic mode, which raises multiple issues.

First and foremost, PV guests don't have an APIC and shouldn't be
playing with it at all.

It turns out that Xen advertises the hardware APIC bit to PV guests,
which isn't necessarily always set.  On top of that, the default
read/write-ignore behaviour of MSRs lets Linux get into a position where
it thinks it is actually making real changes to the APIC mode.

Architecturally speaking, if we offer the APIC bit, we should honour
read/write requests correctly.  Obviously, this isn't a viable option -
hiding the APIC bit and raising #GPs is the only
architecturally-correct way to do this.

Given that we've already played "how much does Linux explode if it
thinks there is no APIC", does anyone have any suggestions for how to
resolve this without breaking Linux?

~Andrew


Re: [Xen-devel] [PATCH 1/2] libxl: modify domain config when moving domain to another cpupool

2018-10-03 Thread George Dunlap
On Wed, Oct 3, 2018 at 12:29 PM Wei Liu  wrote:
>
> On Wed, Oct 03, 2018 at 12:02:24PM +0100, George Dunlap wrote:
> > On Tue, Oct 2, 2018 at 3:20 PM Juergen Gross  wrote:
> > >
> > > Today the domain config info contains the cpupool name the domain was
> > > started in only if the cpupool was specified at domain creation. Moving
> > > the domain to another cpupool later won't change that information.
> > >
> > > Correct that by modifying the domain config accordingly.
> > >
> > > Signed-off-by: Juergen Gross 
> >
> > Would it be better to do this the same way the scheduling parameters
> > was done -- by adding this to libxl_retrieve_domain_configuration()?
> > That way the cpupool would show up in `xl list -l` as well (I think).
>
> This already modifies the saved state file, there will not be mismatch
> between the saved state and the state in hypervisor. `xl list -l` should
> work just fine.

If you do it Juergen's way, `xl list -l` will show things you have
*changed*, but not the defaults.  If you do it the way the scheduling
parameters were done, the pool name will be shown even if no pool was
specified in the config file and the VM was never migrated from the
default pool to a different one.

 -George


Re: [Xen-devel] [PATCH 1/2] libxl: modify domain config when moving domain to another cpupool

2018-10-03 Thread Wei Liu
On Wed, Oct 03, 2018 at 12:02:24PM +0100, George Dunlap wrote:
> On Tue, Oct 2, 2018 at 3:20 PM Juergen Gross  wrote:
> >
> > Today the domain config info contains the cpupool name the domain was
> > started in only if the cpupool was specified at domain creation. Moving
> > the domain to another cpupool later won't change that information.
> >
> > Correct that by modifying the domain config accordingly.
> >
> > Signed-off-by: Juergen Gross 
> 
> Would it be better to do this the same way the scheduling parameters
> was done -- by adding this to libxl_retrieve_domain_configuration()?
> That way the cpupool would show up in `xl list -l` as well (I think).

This already modifies the saved state file, there will not be mismatch
between the saved state and the state in hypervisor. `xl list -l` should
work just fine.

Wei.

> 
>  -George


Re: [Xen-devel] [PATCH 00/12] add per-domain and per-cpupool generic parameters

2018-10-03 Thread Wei Liu
On Wed, Oct 03, 2018 at 01:07:30PM +0200, Juergen Gross wrote:
> On 03/10/2018 13:00, Wei Liu wrote:
> > On Wed, Sep 26, 2018 at 07:30:38PM +0200, Dario Faggioli wrote:
> >> On Fri, 2018-09-21 at 09:52 +0100, Wei Liu wrote:
> >>> On Fri, Sep 21, 2018 at 07:23:23AM +0200, Juergen Gross wrote:
>  On 20/09/18 18:06, Wei Liu wrote:
> >
> > It appears that the implementation in patch 10 concatenates the
> > new
> > settings to the old ones. It is not very nice imo.
> >
> > If for the life time of the domain you set X times the same
> > parameter
> > you get a string of foo=bar1 foo=bar2 in the saved config file.
> >
> > There is probably a simple solution: make the parameter list in
> > IDL a
> > key value list. You then update the list accordingly.
> 
>  The problem with that approach are parameters with sub-parameters:
> 
>  par=sub1=no,sub2=yes
>  par=sub2=yes
> >>>
> >>> There is another way to solve this: further parse the sub-parameters.
> >>> This doesn't require any parameter specific knowledge and there are
> >>> already functions to split strings.
> >>>
> >> I'm not sure whether we're saying the same thing or not, but can't we,
> >> when parameter 'foo', which has been set to 'bar1' already, is being
> >> set to 'bar2', search d_config.b_info.parameters for the substring
> >> containing 'foo=bar1', replace it with 'foo=bar2', and save d_config
> >> again?
> > 
> > This can do, too. It is still parsing so the amount of work needed is
> > more or less the same to me.
> 
> No, this isn't always correct. Think of console=tty0 console=hvc0 in
> the linux kernel. You don't want hvc0 to replace tty0, but to have
> both.

Good point.

Wei.


Re: [Xen-devel] [PATCH v4] x86: use VMLOAD for PV context switch

2018-10-03 Thread Wei Liu
On Wed, Sep 26, 2018 at 01:42:09AM -0600, Jan Beulich wrote:
> Having noticed that VMLOAD alone is about as fast as a single of the
> involved WRMSRs, I thought it might be a reasonable idea to also use it
> for PV. Measurements, however, have shown that an actual improvement can
> be achieved only with an early prefetch of the VMCB (thanks to Andrew
> for suggesting to try this), which I have to admit I can't really
> explain. This way on my Fam15 box context switch takes over 100 clocks
> less on average (the measured values are heavily varying in all cases,
> though).
> 
> This is intentionally not using a new hvm_funcs hook: For one, this is
> all about PV, and something similar can hardly be done for VMX.
> Furthermore the indirect to direct call patching that is meant to be
> applied to most hvm_funcs hooks would be ugly to make work with
> functions having more than 6 parameters.
> 
> Signed-off-by: Jan Beulich 
> Acked-by: Brian Woods 
> Reviewed-by: Boris Ostrovsky 

Reviewed-by: Wei Liu 


Re: [Xen-devel] [PATCH 00/12] add per-domain and per-cpupool generic parameters

2018-10-03 Thread Juergen Gross
On 03/10/2018 13:00, Wei Liu wrote:
> On Wed, Sep 26, 2018 at 07:30:38PM +0200, Dario Faggioli wrote:
>> On Fri, 2018-09-21 at 09:52 +0100, Wei Liu wrote:
>>> On Fri, Sep 21, 2018 at 07:23:23AM +0200, Juergen Gross wrote:
 On 20/09/18 18:06, Wei Liu wrote:
>
> It appears that the implementation in patch 10 concatenates the
> new
> settings to the old ones. It is not very nice imo.
>
> If for the life time of the domain you set X times the same
> parameter
> you get a string of foo=bar1 foo=bar2 in the saved config file.
>
> There is probably a simple solution: make the parameter list in
> IDL a
> key value list. You then update the list accordingly.

 The problem with that approach are parameters with sub-parameters:

 par=sub1=no,sub2=yes
 par=sub2=yes
>>>
>>> There is another way to solve this: further parse the sub-parameters.
>>> This doesn't require any parameter specific knowledge and there are
>>> already functions to split strings.
>>>
>> I'm not sure whether we're saying the same thing or not, but can't we,
>> when parameter 'foo', which has been set to 'bar1' already, is being
>> set to 'bar2', search d_config.b_info.parameters for the substring
>> containing 'foo=bar1', replace it with 'foo=bar2', and save d_config
>> again?
> 
> This can do, too. It is still parsing so the amount of work needed is
> more or less the same to me.

No, this isn't always correct. Think of console=tty0 console=hvc0 in
the Linux kernel. You don't want hvc0 to replace tty0, but to have
both.


Juergen


Re: [Xen-devel] [PATCH 1/2] libxl: modify domain config when moving domain to another cpupool

2018-10-03 Thread George Dunlap
On Tue, Oct 2, 2018 at 3:20 PM Juergen Gross  wrote:
>
> Today the domain config info contains the cpupool name the domain was
> started in only if the cpupool was specified at domain creation. Moving
> the domain to another cpupool later won't change that information.
>
> Correct that by modifying the domain config accordingly.
>
> Signed-off-by: Juergen Gross 

Would it be better to do this the same way the scheduling parameters
were done -- by adding this to libxl_retrieve_domain_configuration()?
That way the cpupool would show up in `xl list -l` as well (I think).

 -George


Re: [Xen-devel] Weird altp2m behaviour when switching early to a new view

2018-10-03 Thread Razvan Cojocaru
On 10/3/18 1:56 PM, Сергей wrote:
>> Not yet, we're working on it.
> Could you point me to the branch with your patches, please? I could not
> find it in https://xenbits.xen.org/gitweb/?p=xen.git

There's no public branch with my patches, I'm working locally. The
original patch has now split into two patches, the first one of which is
currently "[PATCH V3] x86/altp2m: propagate ept.ad changes to all active
altp2ms" (pending review on xen-devel), and the second one depends on
the final form of the first one and is not yet completely written.


Razvan


Re: [Xen-devel] [PATCH 00/12] add per-domain and per-cpupool generic parameters

2018-10-03 Thread Wei Liu
On Wed, Sep 26, 2018 at 07:30:38PM +0200, Dario Faggioli wrote:
> On Fri, 2018-09-21 at 09:52 +0100, Wei Liu wrote:
> > On Fri, Sep 21, 2018 at 07:23:23AM +0200, Juergen Gross wrote:
> > > On 20/09/18 18:06, Wei Liu wrote:
> > > > 
> > > > It appears that the implementation in patch 10 concatenates the
> > > > new
> > > > settings to the old ones. It is not very nice imo.
> > > > 
> > > > If for the life time of the domain you set X times the same
> > > > parameter
> > > > you get a string of foo=bar1 foo=bar2 in the saved config file.
> > > > 
> > > > There is probably a simple solution: make the parameter list in
> > > > IDL a
> > > > key value list. You then update the list accordingly.
> > > 
> > > The problem with that approach are parameters with sub-parameters:
> > > 
> > > par=sub1=no,sub2=yes
> > > par=sub2=yes
> > 
> > There is another way to solve this: further parse the sub-parameters.
> > This doesn't require any parameter specific knowledge and there are
> > already functions to split strings.
> > 
> I'm not sure whether we're saying the same thing or not, but can't we,
> when parameter 'foo', which has been set to 'bar1' already, is being
> set to 'bar2', search d_config.b_info.parameters for the substring
> containing 'foo=bar1', replace it with 'foo=bar2', and save d_config
> again?

This can do, too. It is still parsing so the amount of work needed is
more or less the same to me.

Wei.

> 
> Regards,
> Dario
> -- 
> <> (Raistlin Majere)
> -
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Software Engineer @ SUSE https://www.suse.com/




Re: [Xen-devel] [PATCH 00/12] add per-domain and per-cpupool generic parameters

2018-10-03 Thread Wei Liu
On Thu, Sep 27, 2018 at 07:58:39AM +0200, Juergen Gross wrote:
> On 21/09/18 10:52, Wei Liu wrote:
> > On Fri, Sep 21, 2018 at 07:23:23AM +0200, Juergen Gross wrote:
> >> On 20/09/18 18:06, Wei Liu wrote:
> >>> On Wed, Sep 19, 2018 at 07:58:50PM +0200, Juergen Gross wrote:
> 
>  Did you look into the patches, especially patch 10? The parameters set
>  are all stored in domain config via libxl__arch_domain_save_config().
> >>>
> >>> No, I didn't.
> >>>
> >>> I think the general idea of what you do in patch 10 should work. However
> >>> I want to comment on the implementation.
> >>>
> >>> It appears that the implementation in patch 10 concatenates the new
> >>> settings to the old ones. It is not very nice imo.
> >>>
> >>> If for the life time of the domain you set X times the same parameter
> >>> you get a string of foo=bar1 foo=bar2 in the saved config file.
> >>>
> >>> There is probably a simple solution: make the parameter list in IDL a
> >>> key value list. You then update the list accordingly.
> >>
> >> The problem with that approach are parameters with sub-parameters:
> >>
> >> par=sub1=no,sub2=yes
> >> par=sub2=yes
> > 
> > That means the value type of the top level key value list should ideally
> > be another key value list. I do notice the limitation in the key value
> > list type: the value can only be string.
> > 
> > There is another way to solve this: further parse the sub-parameters.
> > This doesn't require any parameter specific knowledge and there are
> > already functions to split strings.
> 
> I don't think this will work for the general case. It might be that
> 
> par=no
> 
> will switch off all sub-parameters. How would you handle that?

Isn't that what it is supposed to do? Do you want par=no to not turn par
completely off but leave part(s) of par on? Then how do you turn par off
completely?

Wei.

> 
> I'm looking into a way to report the current parameter settings.
> 
> 
> Juergen
> 


Re: [Xen-devel] Weird altp2m behaviour when switching early to a new view

2018-10-03 Thread Сергей
> Not yet, we're working on it.
Could you point me to the branch with your patches, please? I could not
find it in https://xenbits.xen.org/gitweb/?p=xen.git

With best regards
Sergey Kovalev.

Re: [Xen-devel] [PATCH] libxl: Restore scheduling parameters after migrate in best-effort fashion

2018-10-03 Thread Juergen Gross
On 02/10/2018 17:49, George Dunlap wrote:
> Commit 3b4adba ("tools/libxl: include scheduler parameters in the
> output of xl list -l") added scheduling parameters to the set of
> information collected by libxl_retrieve_domain_configuration(), in
> order to report that information in `xl list -l`.
> 
> Unfortunately, libxl_retrieve_domain_configuration() is also called by
> the migration / save code, and the results passed to the restore /
> receive code.  This meant scheduler parameters were inadvertently
> added to the migration stream, without proper consideration for how to
> handle corner cases.  The result was that if migrating from a host
> running one scheduler to a host running a different scheduler, the
> migration would fail with an error like the following:
> 
> libxl: error: libxl_sched.c:232:sched_credit_domain_set: Domain 1:Getting 
> domain sched credit: Invalid argument
> libxl: error: libxl_create.c:1275:domcreate_rebuild_done: Domain 1:cannot 
> (re-)build domain: -3
> 
> Luckily there's a fairly straightforward way to set parameters in a
> "best-effort" fashion.  libxl provides a single struct containing the
> parameters of all schedulers, as well as a parameter specifying which
> scheduler.  Parameters not used by a given scheduler are ignored.
> Additionally, the struct contains a parameter to specify the
> scheduler.  If you specify a specific scheduler,
> libxl_domain_sched_params_set() will fail if there's a different
> scheduler.  However, if you pass LIBXL_SCHEDULER_UNKNOWN, it will use
> the value of the current scheduler for that domain.
> 
> In domcreate_stream_done(), before calling libxl__build_post(), set
> the scheduler to LIBXL_SCHEDULER_UNKNOWN.  This will propagate
> scheduler parameters from the previous instantiation on a best-effort
> basis.
> 
> Signed-off-by: George Dunlap 

Acked-by: Juergen Gross 


Juergen


Re: [Xen-devel] [PATCH] libxl: Restore scheduling parameters after migrate in best-effort fashion

2018-10-03 Thread George Dunlap
On 10/02/2018 04:49 PM, George Dunlap wrote:
> Commit 3b4adba ("tools/libxl: include scheduler parameters in the
> output of xl list -l") added scheduling parameters to the set of
> information collected by libxl_retrieve_domain_configuration(), in
> order to report that information in `xl list -l`.
> 
> Unfortunately, libxl_retrieve_domain_configuration() is also called by
> the migration / save code, and the results passed to the restore /
> receive code.  This meant scheduler parameters were inadvertently
> added to the migration stream, without proper consideration for how to
> handle corner cases.  The result was that if migrating from a host
> running one scheduler to a host running a different scheduler, the
> migration would fail with an error like the following:
> 
> libxl: error: libxl_sched.c:232:sched_credit_domain_set: Domain 1:Getting 
> domain sched credit: Invalid argument
> libxl: error: libxl_create.c:1275:domcreate_rebuild_done: Domain 1:cannot 
> (re-)build domain: -3
> 
> Luckily there's a fairly straightforward way to set parameters in a
> "best-effort" fashion.  libxl provides a single struct containing the
> parameters of all schedulers, as well as a parameter specifying which
> scheduler.  Parameters not used by a given scheduler are ignored.
> Additionally, the struct contains a parameter to specify the
> scheduler.  If you specify a specific scheduler,
> libxl_domain_sched_params_set() will fail if there's a different
> scheduler.  However, if you pass LIBXL_SCHEDULER_UNKNOWN, it will use
> the value of the current scheduler for that domain.
> 
> In domcreate_stream_done(), before calling libxl__build_post(), set
> the scheduler to LIBXL_SCHEDULER_UNKNOWN.  This will propagate
> scheduler parameters from the previous instantiation on a best-effort
> basis.
> 
> Signed-off-by: George Dunlap 

I've tested this with save/restore now, and it works fine (including
changing the weight of the VM from default and having the updated weight
show up in the other scheduler on restore).

 -George

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v3 3/3] tools/libxl: Switch Arm guest type to PVH

2018-10-03 Thread Wei Liu
On Mon, Oct 01, 2018 at 07:57:21PM +0100, Julien Grall wrote:
> Currently, the toolstack always considers Arm guests to be PV. However,
> they are very similar to PVH, because the hardware virtualization
> extensions are used and QEMU is not started. So switch the Arm guest
> type to PVH.
> 
> To keep compatibility with toolstacks creating Arm guests with the PV
> type (e.g. libvirt), libxl will now convert those guests to PVH.
> 
> Furthermore, the default type for Arm in xl will now be PVH, to allow
> a smooth transition for users.
> 
> Signed-off-by: Julien Grall 

Acked-by: Wei Liu 


Re: [Xen-devel] [PATCH v13 3/9] iommu: push use of type-safe DFN and MFN into iommu_ops

2018-10-03 Thread Suthikulpanit, Suravee
On 10/3/18 12:00 AM, Paul Durrant wrote:
> This patch modifies the methods in struct iommu_ops to use type-safe DFN
> and MFN. This follows on from the prior patch that modified the functions
> exported in xen/iommu.h.
> 
> Signed-off-by: Paul Durrant
> Reviewed-by: Wei Liu
> Reviewed-by: Kevin Tian
> Reviewed-by: Roger Pau Monne
> Acked-by: Jan Beulich
> ---
> Cc: Suravee Suthikulpanit
> Cc: Andrew Cooper
> Cc: George Dunlap

Acked-by: Suravee Suthikulpanit 

Thanks,
Suravee

Re: [Xen-devel] [PATCH v3 2/3] tools/libxl: Deprecate PV fields kernel, ramdisk, cmdline

2018-10-03 Thread Wei Liu
On Mon, Oct 01, 2018 at 07:57:19PM +0100, Julien Grall wrote:
> The PV fields kernel, ramdisk, cmdline are only there for compatibility
> with old toolstacks. Instead of manually copying them over to their new
> fields, use the deprecated_by attribute in the IDL.
> 
> Suggested-by: Roger Pau Monné 
> Signed-off-by: Julien Grall 

Acked-by: Wei Liu 


[Xen-devel] [PATCH] mm/page_alloc: add bootscrub=idle cmdline option

2018-10-03 Thread Sergey Dyasli
Scrubbing RAM during boot may take a long time on machines with lots
of RAM. Add an 'idle' option which marks all pages dirty initially, so
that they are eventually scrubbed in the idle loop on every online CPU.

Idle-loop scrubbing performs worse than bootscrub, but the allocator is
still guaranteed to return scrubbed pages, because it scrubs eagerly
during allocation (unless MEMF_no_scrub was specified).

Signed-off-by: Sergey Dyasli 
---
CC: Andrew Cooper 
CC: Boris Ostrovsky 
CC: George Dunlap 
CC: Jan Beulich 
CC: Julien Grall 
CC: Tim Deegan 
---
 docs/misc/xen-command-line.markdown |  7 +-
 xen/common/page_alloc.c | 36 +
 2 files changed, 38 insertions(+), 5 deletions(-)

diff --git a/docs/misc/xen-command-line.markdown 
b/docs/misc/xen-command-line.markdown
index 1ffd586224..4c60905837 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -227,7 +227,7 @@ that byte `0x12345678` is bad, you would place 
`badpage=0x12345` on
 Xen's command line.
 
 ### bootscrub
-> `= <boolean>`
+> `= <boolean> | idle`
 
 > Default: `true`
 
@@ -235,6 +235,11 @@ Scrub free RAM during boot.  This is a safety feature to prevent
 accidentally leaking sensitive VM data into other VMs if Xen crashes
 and reboots.
 
+In `idle` mode, RAM is scrubbed in the background on all CPUs during the
+idle loop, with a guarantee that memory allocations always provide
+scrubbed pages.  This option reduces boot time on machines with a large
+amount of RAM while still providing the security benefits.
+
 ### bootscrub\_chunk
 > `= <size>`
 
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 16e1b0c357..c85f44874a 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -161,8 +161,32 @@ string_param("badpage", opt_badpage);
 /*
  * no-bootscrub -> Free pages are not zeroed during boot.
  */
-static bool_t opt_bootscrub __initdata = 1;
-boolean_param("bootscrub", opt_bootscrub);
+enum {
+    BOOTSCRUB_OFF = 0,
+    BOOTSCRUB_ON,
+    BOOTSCRUB_IDLE,
+};
+static int __read_mostly opt_bootscrub = BOOTSCRUB_ON;
+static int __init parse_bootscrub_param(const char *s)
+{
+    if ( *s == '\0' )
+        return 0;
+
+    if ( !strcmp(s, "idle") )
+        opt_bootscrub = BOOTSCRUB_IDLE;
+    else
+        opt_bootscrub = parse_bool(s, NULL);
+
+    if ( opt_bootscrub < 0 )
+    {
+        opt_bootscrub = BOOTSCRUB_ON;
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+custom_param("bootscrub", parse_bootscrub_param);
 
 /*
  * bootscrub_chunk -> Amount of bytes to scrub lockstep on non-SMT CPUs
@@ -1763,7 +1787,8 @@ static void init_heap_pages(
             nr_pages -= n;
         }
 
-        free_heap_pages(pg + i, 0, scrub_debug);
+        free_heap_pages(pg + i, 0, scrub_debug ||
+                        opt_bootscrub == BOOTSCRUB_IDLE);
     }
 }
 
@@ -2039,8 +2064,11 @@ void __init heap_init_late(void)
      */
     setup_low_mem_virq();
 
-    if ( opt_bootscrub )
+    if ( opt_bootscrub == BOOTSCRUB_ON )
         scrub_heap_pages();
+    else if ( opt_bootscrub == BOOTSCRUB_IDLE )
+        printk("Scrubbing Free RAM on %d nodes in background\n",
+               num_online_nodes());
 }
 
 
-- 
2.17.1



Re: [Xen-devel] [PATCH v13 1/9] iommu: introduce the concept of DFN...

2018-10-03 Thread Suthikulpanit, Suravee
Hi,

On 10/3/18 12:00 AM, Paul Durrant wrote:
> ...meaning 'device DMA frame number' i.e. a frame number mapped in the IOMMU
> (rather than the MMU) and hence used for DMA address translation.
> 
> This patch is a largely cosmetic change that substitutes the terms 'gfn'
> and 'gaddr' for 'dfn' and 'daddr' in all the places where the frame number
> or address relate to a device rather than the CPU.
> 
> The parts that are not purely cosmetic are:
> 
>   - the introduction of a type-safe declaration of dfn_t and definition of
> INVALID_DFN to make the substitution of gfn_x(INVALID_GFN) mechanical.
>   - the introduction of __dfn_to_daddr and __daddr_to_dfn (and type-safe
> variants without the leading __) with some use of the former.
> 
> Subsequent patches will convert code to make use of type-safe DFNs.
> 
> Signed-off-by: Paul Durrant
> Acked-by: Jan Beulich
> Reviewed-by: Kevin Tian
> Acked-by: Julien Grall
> ---
> Cc: Wei Liu
> Cc: Suravee Suthikulpanit
> Cc: Stefano Stabellini

Acked-by: Suravee Suthikulpanit 

Thanks,
Suravee

Re: [Xen-devel] [PATCH 2/2] xl: add target cpupool parameter to xl migrate

2018-10-03 Thread Ian Jackson
Wei Liu writes ("Re: [Xen-devel] [PATCH 2/2] xl: add target cpupool parameter to xl migrate"):
> On Tue, Oct 02, 2018 at 05:08:27PM +0200, Juergen Gross wrote:
> > And TBH: I consider the -C option being quite dangerous. While I can
> > understand why it is present it is still a rather hacky approach for a
> > general problem. Same applies to the capability to modify random
> > settings of the domain config.
> 
> The -C option is rather dangerous. It disregards all guest state. You
> will likely lose your MAC address unless you have set the same MAC
> address in the override file. The same goes for other state libxl may
> have saved during domain creation.

We should have a way to *alter* config settings rather than only one
to replace the config.  That is more useful, and less dangerous.

Ian.


[Xen-devel] [xen-unstable-coverity test] 128338: all pass - PUSHED

2018-10-03 Thread osstest service owner
flight 128338 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/128338/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen  54ec59f6b0b363c34cf1864d5214a05e35ea75ee
baseline version:
 xen  edb4724e36256c495a6aa3cf1a12722efe271f9d

Last test of basis   128253  2018-09-30 09:18:36 Z    3 days
Testing same since   128338  2018-10-03 09:19:02 Z    0 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  George Dunlap 
  Julien Grall 
  Marc Zyngier 
  Roger Pau Monné 
  Shameer Kolothum 
  Stefano Stabellini 
  Suravee Suthikulpanit 
  Volodymyr Babchuk 
  Wei Liu 

jobs:
 coverity-amd64   pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   edb4724e36..54ec59f6b0  54ec59f6b0b363c34cf1864d5214a05e35ea75ee -> coverity-tested/smoke


[Xen-devel] [distros-debian-squeeze test] 75342: trouble: blocked/broken

2018-10-03 Thread Platform Team regression test user
flight 75342 distros-debian-squeeze real [real]
http://osstest.xensource.com/osstest/logs/75342/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-pvops broken
 build-i386 broken
 build-amd64-pvops broken
 build-armhf broken
 build-amd64 broken
 build-i386-pvops broken

Tests which did not succeed, but are not blocking:
 test-amd64-i386-i386-squeeze-netboot-pygrub  1 build-check(1)  blocked n/a
 test-amd64-amd64-i386-squeeze-netboot-pygrub  1 build-check(1) blocked n/a
 test-amd64-i386-amd64-squeeze-netboot-pygrub  1 build-check(1) blocked n/a
 test-amd64-amd64-amd64-squeeze-netboot-pygrub  1 build-check(1)blocked n/a
 build-armhf-pvops 4 host-install(4) broken like 75294
 build-armhf 4 host-install(4) broken like 75294
 build-amd64-pvops 4 host-install(4) broken like 75294
 build-amd64 4 host-install(4) broken like 75294
 build-i386 4 host-install(4) broken like 75294
 build-i386-pvops 4 host-install(4) broken like 75294

baseline version:
 flight   75294

jobs:
 build-amd64  broken  
 build-armhf  broken  
 build-i386   broken  
 build-amd64-pvopsbroken  
 build-armhf-pvopsbroken  
 build-i386-pvops broken  
 test-amd64-amd64-amd64-squeeze-netboot-pygrubblocked 
 test-amd64-i386-amd64-squeeze-netboot-pygrub blocked 
 test-amd64-amd64-i386-squeeze-netboot-pygrub blocked 
 test-amd64-i386-i386-squeeze-netboot-pygrub  blocked 



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xensource.com/osstest/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.



Re: [Xen-devel] Xen PV: Sample new PV driver for buffer sharing between domains

2018-10-03 Thread Julien Grall
On 10/02/2018 11:03 AM, Omkar Bolla wrote:
> Hi,
> 
> Thanks,
> Basic state change is working now, after using the above script.
> 
> As I said, I want to share a buffer between two domains.
> Could you please suggest an outline of how I can share a buffer
> between two domains (guest and host)?

My question on a previous e-mail was left unanswered. Do you have 
requirements to share the buffer dynamically?


If not, you may want to have a look at "Allow setting up shared memory 
areas between VMs from xl config files" [2]. We aim to merge it in the 
next Xen release.


Cheers,

[2] https://lists.xen.org/archives/html/xen-devel/2018-08/msg00883.html

> This message contains confidential information and is intended only for
> the individual(s) named. If you are not the intended recipient, you are
> notified that disclosing, copying, distributing or taking any action in
> reliance on the contents of this mail and attached file/s is strictly
> prohibited. Please notify the sender immediately and delete this e-mail
> from your system. E-mail transmission cannot be guaranteed to be secured
> or error-free as information could be intercepted, corrupted, lost,
> destroyed, arrive late or incomplete, or contain viruses. The sender
> therefore does not accept liability for any errors or omissions in the
> contents of this message, which arise as a result of e-mail transmission.

Please configure your mail client to remove your company disclaimer.

Cheers,

--
Julien Grall


Re: [Xen-devel] [PATCH 2/2] xl: add target cpupool parameter to xl migrate

2018-10-03 Thread Wei Liu
On Tue, Oct 02, 2018 at 03:43:37PM +0100, Ian Jackson wrote:
> Andrew Cooper writes ("Re: [Xen-devel] [PATCH 2/2] xl: add target cpupool parameter to xl migrate"):
> > Is this the wisest way to extend the interface?  We already have -C to
> > specify new configuration, and only have 26*2 short options to use.
> 
> Very good question.
> 
> > What if the user could supply a xl.cfg snippet on the command line to be
> > merged over the existing configuration?  That would allow any parameter
> > to be changed, rather than just the cpupool.
> 
> +1

This is a good idea. It would be a worthwhile thing to do on its own.

We already have config-update, which at the moment is to replace domain
configuration wholesale. We can extend that command to merge fields
instead.

Wei.


Re: [Xen-devel] [PATCH 2/2] xl: add target cpupool parameter to xl migrate

2018-10-03 Thread Wei Liu
On Tue, Oct 02, 2018 at 05:08:27PM +0200, Juergen Gross wrote:
> On 02/10/2018 16:42, Andrew Cooper wrote:
> > On 02/10/18 15:19, Juergen Gross wrote:
> >> Add an option to specify the cpupool on the target machine when doing
> >> a migration of a domain. Currently a domain is always migrated to the
> >> cpupool with the same name as on the source machine.
> >>
> >> Specifying "-c <cpupool>" will migrate the domain to the specified
> >> cpupool on the target machine. Specifying an empty string for <cpupool>
> >> will use the default cpupool (normally "Pool-0") on the target machine.
> >>
> >> Signed-off-by: Juergen Gross 
> >> ---
> >>  docs/man/xl.pod.1.in  |  5 +
> >>  tools/xl/xl.h |  1 +
> >>  tools/xl/xl_cmdtable.c|  3 +++
> >>  tools/xl/xl_migrate.c | 15 ++-
> >>  tools/xl/xl_saverestore.c | 10 +-
> >>  5 files changed, 28 insertions(+), 6 deletions(-)
> >>
> >> diff --git a/docs/man/xl.pod.1.in b/docs/man/xl.pod.1.in
> >> index b74764dcd3..62f7c0f039 100644
> >> --- a/docs/man/xl.pod.1.in
> >> +++ b/docs/man/xl.pod.1.in
> >> @@ -451,6 +451,11 @@ domain. See the corresponding option of the I<create> subcommand.
> >>  Send the specified <config> file instead of the file used on creation of the
> >>  domain.
> >>  
> >> +=item B<-c> I<cpupool>
> >> +
> >> +Migrate the domain to the specified I<cpupool> on the target host. Specifying
> >> +an empty string for I<cpupool> will use the default cpupool on I<host>.
> >> +
> > 
> > Is this the wisest way to extend the interface?  We already have -C to
> > specify new configuration, and only have 26*2 short options to use.
> > 
> > What if the user could supply a xl.cfg snippet on the command line to be
> > merged over the existing configuration?  That would allow any parameter
> > to be changed, rather than just the cpupool.
> 
> I'm not opposed to that suggestion, but I believe the cpupool is rather
> special: it is more like a migration target specification than a domain
> parameter.
> 
> In case you are mostly concerned by burning another short option letter
> I can switch to using --cpupool= syntax.
> 
> And TBH: I consider the -C option being quite dangerous. While I can
> understand why it is present it is still a rather hacky approach for a
> general problem. Same applies to the capability to modify random
> settings of the domain config.

The -C option is rather dangerous. It disregards all guest state. You
will likely lose your MAC address unless you have set the same MAC
address in the override file. The same goes for other state libxl may
have saved during domain creation.

Wei.

> 
> Juergen
> 


[Xen-devel] [qemu-mainline test] 128324: tolerable FAIL - PUSHED

2018-10-03 Thread osstest service owner
flight 128324 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/128324/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 128291
 test-armhf-armhf-libvirt 14 saverestore-support-check fail like 128291
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail like 128291
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail like 128291
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 128291
 test-amd64-i386-xl-pvshim 12 guest-start fail never pass
 test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1 14 saverestore-support-check fail never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 14 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit2 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 14 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit1 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1 14 saverestore-support-check fail never pass
 test-arm64-arm64-xl 13 migrate-support-check fail never pass
 test-arm64-arm64-xl 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl 13 migrate-support-check fail never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass

version targeted for testing:
 qemuu                3892f1f1a963e59dfe012cd9d461d33b2986fa3b
baseline version:
 qemuu                a2ef4d9e95400cd387ab4ae19a317741e458fb07

Last test of basis   128291  2018-10-01 18:07:26 Z    1 days
Failing since        128311  2018-10-02 10:40:33 Z    0 days    2 attempts
Testing same since   128324  2018-10-02 19:37:21 Z    0 days    1 attempts


People who touched revisions under test:
  Alberto Garcia 
  David Gibson 
  Fam Zheng 
  Kevin Wolf 
  Leonid Bloch 
  Max Filippov 
  Max Reitz 
  Peter Maydell 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 

Re: [Xen-devel] [PATCH 2/2] xl: add target cpupool parameter to xl migrate

2018-10-03 Thread Wei Liu
On Tue, Oct 02, 2018 at 04:19:34PM +0200, Juergen Gross wrote:
> Add an option to specify the cpupool on the target machine when doing
> a migration of a domain. Currently a domain is always migrated to the
> cpupool with the same name as on the source machine.
> 
> Specifying "-c <cpupool>" will migrate the domain to the specified
> cpupool on the target machine. Specifying an empty string for <cpupool>
> will use the default cpupool (normally "Pool-0") on the target machine.

I think this is a worthwhile addition to xl.

> diff --git a/tools/xl/xl_saverestore.c b/tools/xl/xl_saverestore.c
> index 9afeadeeb2..2583b6c800 100644
> --- a/tools/xl/xl_saverestore.c
> +++ b/tools/xl/xl_saverestore.c
> @@ -33,6 +33,7 @@
>  
>  void save_domain_core_begin(uint32_t domid,
>                              const char *override_config_file,
> +                            const char *override_cpupool,
>                              uint8_t **config_data_r,
>                              int *config_len_r)
>  {
> @@ -63,6 +64,13 @@ void save_domain_core_begin(uint32_t domid,
>          }
>      }
>  
> +    if (override_cpupool) {
> +        free(d_config.c_info.pool_name);
> +        d_config.c_info.pool_name = NULL;
> +        if (override_cpupool[0])
> +            d_config.c_info.pool_name = strdup(override_cpupool);

xstrdup please.

> +    }
> +
>  config_c = libxl_domain_config_to_json(ctx, &d_config);
>  if (!config_c) {
>  fprintf(stderr, "unable to convert config file to JSON\n");
> @@ -126,7 +134,7 @@ static int save_domain(uint32_t domid, const char *filename, int checkpoint,
>      uint8_t *config_data;
>      int config_len;
>  
> -    save_domain_core_begin(domid, override_config_file,
> +    save_domain_core_begin(domid, override_config_file, NULL,
>                             &config_data, &config_len);
>  
>  if (!config_len) {
> -- 
> 2.16.4
> 


Re: [Xen-devel] [PATCH 1/2] libxl: modify domain config when moving domain to another cpupool

2018-10-03 Thread Wei Liu
On Tue, Oct 02, 2018 at 04:19:33PM +0200, Juergen Gross wrote:
> Today the domain config info contains the cpupool name the domain was
> started in only if the cpupool was specified at domain creation. Moving
> the domain to another cpupool later won't change that information.
> 
> Correct that by modifying the domain config accordingly.
> 
> Signed-off-by: Juergen Gross 
> ---
>  tools/libxl/libxl_cpupool.c | 28 +---
>  1 file changed, 25 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/libxl/libxl_cpupool.c b/tools/libxl/libxl_cpupool.c
> index 85b06882db..92cf29bc6b 100644
> --- a/tools/libxl/libxl_cpupool.c
> +++ b/tools/libxl/libxl_cpupool.c
> @@ -430,17 +430,39 @@ out:
>  int libxl_cpupool_movedomain(libxl_ctx *ctx, uint32_t poolid, uint32_t domid)
>  {
>      GC_INIT(ctx);
> +    libxl_domain_config d_config;
> +    libxl__domain_userdata_lock *lock = NULL;
>      int rc;
>  
> +    libxl_domain_config_init(&d_config);
> +
>      rc = xc_cpupool_movedomain(ctx->xch, poolid, domid);
>      if (rc) {
>          LOGEVD(ERROR, rc, domid, "Error moving domain to cpupool");
> -        GC_FREE;
> -        return ERROR_FAIL;
> +        rc = ERROR_FAIL;
> +        goto out;
> +    }
> +
> +    lock = libxl__lock_domain_userdata(gc, domid);
> +    if (!lock) {
> +        rc = ERROR_LOCK_FAIL;
> +        goto out;
>      }

It is better to move the lock before calling xc_cpupool_movedomain to
avoid races when there are multiple callers of libxl_cpupool_movedomain.

Wei.

>  
> +    rc = libxl__get_domain_configuration(gc, domid, &d_config);
> +    if (rc)
> +        goto out;
> +
> +    free(d_config.c_info.pool_name);
> +    d_config.c_info.pool_name = libxl_cpupoolid_to_name(ctx, poolid);
> +
> +    rc = libxl__set_domain_configuration(gc, domid, &d_config);
> +
> +out:
> +    if (lock) libxl__unlock_domain_userdata(lock);
> +    libxl_domain_config_dispose(&d_config);
>      GC_FREE;
> -    return 0;
> +    return rc;
>  }
>  
>  /*
> -- 
> 2.16.4
> 


Re: [Xen-devel] [PATCH 0/4] tools/xen-hvmctx: drop bogus casts

2018-10-03 Thread Wei Liu
On Tue, Oct 02, 2018 at 05:44:15AM -0600, Jan Beulich wrote:
> ... and try to improve readability of some of the output.
> 
> 1: drop bogus casts from dump_cpu()
> 2: drop bogus casts from dump_lapic_regs()
> 3: drop bogus casts from dump_hpet()
> 4: drop bogus casts from dump_mtrr()

Acked-by: Wei Liu 

Please commit these patches yourself.
