All,
I am pleased to announce the release of Xen 4.7.4. This is
available immediately from its git repository
http://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.7
(tag RELEASE-4.7.4) or from the XenProject download page
https://xenproject.org/downloads/xen-archives/xen-proj
All,
I am pleased to announce the release of Xen 4.9.1. This is
available immediately from its git repository
http://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.9
(tag RELEASE-4.9.1) or from the XenProject download page
http://www.xenproject.org/downloads/xen-archives/xen-pr
>>> On 22.11.17 at 15:40, wrote:
> On 22/11/17 15:05, Jan Beulich wrote:
>> Jürgen, Boris,
>>
>> am I trying something that's not allowed, but selectable via Kconfig?
On a system with multiple IO-APICs (I assume that's what triggers the
>> p
Jürgen, Boris,
am I trying something that's not allowed, but selectable via Kconfig?
On a system with multiple IO-APICs (I assume that's what triggers the
problem) I get
Kernel panic - not syncing: Max apic_id exceeded!
CPU: 0 PID: 0 Comm: swapper Not tainted 4.14.1-2017-11-21-xen0 #6
Hardware nam
_vcpu_destroy()) and the
> intention to limit the performance impact (otherwise it could also go
> into rcu_do_batch(), paralleling the use in do_tasklet_work()).
>
> Reported-by: Igor Druzhinin
> Signed-off-by: Jan Beulich
I'm sorry, Julien, I did forget to Cc you
(otherwise it could also go
into rcu_do_batch(), paralleling the use in do_tasklet_work()).
Reported-by: Igor Druzhinin
Signed-off-by: Jan Beulich
---
v2: Move from vmx_vcpu_destroy() to complete_domain_destroy().
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -794,6 +794,14 @@ static void
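The v2 change moves the sync from vmx_vcpu_destroy() into the RCU callback path (complete_domain_destroy()), so teardown never runs on lazily retained vCPU state. A minimal sketch of that idea, with stub scaffolding in place of Xen's real context-switch machinery (only the sync_local_execstate() name comes from Xen; the flag is an assumption for illustration):

```c
#include <stdbool.h>

/* Test scaffolding: a flag standing in for "has any lazily retained
 * vCPU state on this pCPU been flushed?". Not Xen code. */
static bool state_synced;
static void sync_local_execstate(void) { state_synced = true; }

/* RCU callback tearing down a domain: flush lazily retained context
 * first, so the teardown never touches stale per-vCPU state. */
static void complete_domain_destroy(void)
{
    if ( !state_synced )
        sync_local_execstate();
    /* ... actual domain teardown would follow here ... */
}
```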
>>> On 21.11.17 at 19:19, wrote:
> xentrace I would argue for security support; I've asked customers to
> send me xentrace data as part of analysis before. I also know enough
> about it that I'm reasonably confident the risk of an attack vector is
> pretty low.
Knowing pretty little about xentra
>>> On 21.11.17 at 19:02, wrote:
> On 11/21/2017 08:39 AM, Jan Beulich wrote:
>>>>> On 13.11.17 at 16:41, wrote:
>>> +### x86/Nested PV
>>> +
>>> +Status, x86 HVM: Tech Preview
>>> +
>>> +This means running a Xen
>>> On 21.11.17 at 18:35, wrote:
> On 11/21/2017 08:29 AM, Jan Beulich wrote:
>>> +### QEMU backend hotplugging for xl
>>> +
>>> +Status: Supported
>>
>> Wouldn't this more appropriately be
>>
>> ### QEMU backend hotplu
>>> On 21.11.17 at 18:20, wrote:
> On 11/21/2017 11:41 AM, Jan Beulich wrote:
>>>>> On 21.11.17 at 11:56, wrote:
>>> On 11/21/2017 08:29 AM, Jan Beulich wrote:
>>>>>>> On 13.11.17 at 16:41, wrote:
>>>>> +### PV USB suppor
>>> On 23.10.17 at 11:05, wrote:
> --- a/xen/arch/x86/hvm/dm.c
> +++ b/xen/arch/x86/hvm/dm.c
> @@ -21,6 +21,7 @@
>
> #include
> #include
> +#include
> #include
With this addition moved up a line to result in a properly sorted set
Reviewed-by: Jan Beulic
>>> On 21.11.17 at 18:00, wrote:
> On Tue, 2017-11-21 at 08:29 -0700, Jan Beulich wrote:
>> > > > On 21.11.17 at 15:07, wrote:
>> >
>> > On 21/11/17 13:22, Jan Beulich wrote:
>> > > > > > On 09.11.17 at 15:49, wrote:
>&g
>>> On 23.10.17 at 11:05, wrote:
First of all, instead of xen: please consider using something more
specific, like x86/hvm:.
> --- a/xen/include/public/hvm/dm_op.h
> +++ b/xen/include/public/hvm/dm_op.h
> @@ -368,6 +368,22 @@ struct xen_dm_op_remote_shutdown {
> /* (O
>>> On 23.10.17 at 11:05, wrote:
> Make it global in preparation for being called by a new dmop.
>
> Signed-off-by: Ross Lagerwall
>
> ---
> Reviewed-by: Paul Durrant
Misplaced tag.
I'd prefer if the function was made non-static in the patch which
needs it so,
>>> On 21.11.17 at 15:07, wrote:
> On 21/11/17 13:22, Jan Beulich wrote:
>>>>> On 09.11.17 at 15:49, wrote:
>>> See the code comment being added for why we need this.
>>>
>>> Reported-by: Igor Druzhinin
>>> Signed-off-by: Jan Beul
>>> On 06.11.17 at 16:04, wrote:
> On 11/06/2017 11:59 AM, Jan Beulich wrote:
>>>>> On 16.10.17 at 14:42, wrote:
>>>>>> On 16.10.17 at 14:37, wrote:
>>>> On 16/10/17 13:32, Jan Beulich wrote:
>>>>> Since the emulator ac
>>> On 09.11.17 at 15:49, wrote:
> See the code comment being added for why we need this.
>
> Reported-by: Igor Druzhinin
> Signed-off-by: Jan Beulich
I realize we aren't settled yet on where to put the sync call. The
discussion appears to have stalled, though. Just
>>> On 21.11.17 at 13:39, wrote:
> What about something like this?
>
> ### IOMMU
>
> Status, AMD IOMMU: Supported
> Status, Intel VT-d: Supported
> Status, ARM SMMUv1: Supported
> Status, ARM SMMUv2: Supported
Fine with me, as it makes things explicit.
Jan
>>> On 21.11.17 at 13:24, wrote:
>> On Nov 21, 2017, at 11:35 AM, Jan Beulich
>> Much depends on whether you think "guest" == "DomU". To me
>> Dom0 is a guest, too.
>
> That’s not how I’ve ever understood those terms.
>
> A guest
>>> On 21.11.17 at 12:48, wrote:
> On 21/11/17 12:27, Jan Beulich wrote:
>>>>> On 21.11.17 at 12:06, wrote:
>>> The "special pages" for PVH guests include the frames for console and
>>> Xenstore ring buffers. Those have to be marked
>>> On 21.11.17 at 11:56, wrote:
> On 11/21/2017 08:29 AM, Jan Beulich wrote:
>>>>> On 13.11.17 at 16:41, wrote:
>>> +### PV USB support for xl
>>> +
>>> +Status: Supported
>>> +
>>> +### PV 9pfs support for xl
>&
>>> On 21.11.17 at 11:45, wrote:
> On 11/21/2017 08:11 AM, Jan Beulich wrote:
>>>>> On 13.11.17 at 16:41, wrote:
>>> +### ARM/SMMUv1
>>> +
>>> +Status: Supported
>>> +
>>> +### ARM/SMMUv2
>>> +
>>> +
>>> On 21.11.17 at 11:42, wrote:
> On 11/21/2017 08:09 AM, Jan Beulich wrote:
>>>>> On 13.11.17 at 16:41, wrote:
>>> +### x86/PVH guest
>>> +
>>> +Status: Supported
>>> +
>>> +PVH is a next-generation paravirtualized
>>> On 21.11.17 at 11:36, wrote:
> On 11/21/2017 08:03 AM, Jan Beulich wrote:
>>>>> On 13.11.17 at 16:41, wrote:
>>> --- a/SUPPORT.md
>>> +++ b/SUPPORT.md
>>> @@ -16,6 +16,65 @@ for the definitions of the support status levels etc.
>&g
>>> On 21.11.17 at 12:06, wrote:
> The "special pages" for PVH guests include the frames for console and
> Xenstore ring buffers. Those have to be marked as "Reserved" in the
> guest's E820 map, as otherwise conflicts might arise later e.g. when
> hotplugging memory into the guest.
Afaict this di
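For illustration only, a sketch of what "marked as Reserved in the guest's E820 map" amounts to; struct e820_entry, E820_RESERVED and the region constants here are assumptions for the example, not PVH's actual definitions:

```c
#include <stdint.h>
#include <stddef.h>

#define E820_RAM      1
#define E820_RESERVED 2

struct e820_entry {
    uint64_t addr;
    uint64_t size;
    uint32_t type;
};

/* Append a "Reserved" entry covering e.g. the console/Xenstore ring
 * frames, so later memory hotplug cannot hand out those addresses. */
static size_t e820_mark_reserved(struct e820_entry *map, size_t nr,
                                 uint64_t start, uint64_t size)
{
    map[nr].addr = start;
    map[nr].size = size;
    map[nr].type = E820_RESERVED;
    return nr + 1;
}
```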
>>> On 13.11.17 at 16:41, wrote:
> +### Virtual CPUs
> +
> +Limit, x86 PV: 8192
> +Limit-security, x86 PV: 32
> +Limit, x86 HVM: 128
> +Limit-security, x86 HVM: 32
Personally I consider the "Limit-security" numbers too low here, but
I have no proof that higher numbers will work _i
ending for ARM and (b) exclude PVH (assuming
that its absence means non-existing code).
> +Only systems using IOMMUs will be supported.
> +
> +Not compatible with migration, altp2m, introspection, memory sharing, or
> memory paging.
And PoD, iirc.
With these adjustments (or substantial
>>> On 13.11.17 at 16:41, wrote:
> Signed-off-by: George Dunlap
Wouldn't PoD belong here too? With that added as supported on x86
HVM
Acked-by: Jan Beulich
Jan
___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
>>> On 13.11.17 at 16:41, wrote:
> With the exception of driver domains, which depend on PCI passthrough,
> and will be introduced later.
>
> Signed-off-by: George Dunlap
Shouldn't we also explicitly exclude tool stack disaggregation here,
with reference to XSA-77?
Jan
>>> On 13.11.17 at 16:41, wrote:
> +### x86/vMCE
> +
> +Status: Supported
> +
> +Forward Machine Check Exceptions to Appropriate guests
Acked-by: Jan Beulich
perhaps with the A converted to lower case.
Jan
>>> On 13.11.17 at 16:41, wrote:
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -152,6 +152,35 @@ Output of information in machine-parseable JSON format
>
> Status: Supported, Security support external
>
> +## Debugging, analysis, and crash post-mortem
> +
> +### gdbsx
> +
> +Status, x86:
>>> On 21.11.17 at 09:13, wrote:
> On 21/11/17 08:50, Jan Beulich wrote:
>>>>> On 20.11.17 at 19:28, wrote:
>>> On 20/11/17 17:14, Jan Beulich wrote:
>>>>>>> On 20.11.17 at 16:24, wrote:
>>>>> So without my patch the
>>> On 13.11.17 at 16:41, wrote:
> +### x86/Nested PV
> +
> +Status, x86 HVM: Tech Preview
> +
> +This means running a Xen hypervisor inside an HVM domain,
> +with support for PV L2 guests only
> +(i.e., hardware virtualization extensions not provided
> +to the guest).
> +
> +This works, but h
>>> On 13.11.17 at 16:41, wrote:
> +### PV USB support for xl
> +
> +Status: Supported
> +
> +### PV 9pfs support for xl
> +
> +Status: Tech Preview
Why are these two being called out, but xl support for other device
types isn't?
> +### QEMU backend hotplugging for xl
> +
> +Status:
>>> On 13.11.17 at 16:41, wrote:
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -195,6 +195,27 @@ on embedded platforms.
>
> Enables NUMA aware scheduling in Xen
>
> +## Scalability
> +
> +### 1GB/2MB super page support
> +
> +Status, x86 HVM/PVH: Supported
On top of what you and Julien ha
>>> On 13.11.17 at 16:41, wrote:
> +### ARM/SMMUv1
> +
> +Status: Supported
> +
> +### ARM/SMMUv2
> +
> +Status: Supported
Do these belong here, when IOMMU isn't part of the corresponding
x86 patch?
Jan
>>> On 13.11.17 at 16:41, wrote:
> +### Host ACPI (via Domain 0)
> +
> +Status, x86 PV: Supported
> +Status, x86 PVH: Tech preview
Are we this far already? Preview implies functional completeness,
but I'm not sure about all ACPI related parts actually having been
implemented (and see also
d
Is this a proper feature in the context we're talking about? To me
it's meaningful in guest OS context only. I also wouldn't really
consider it "core", but placement within the series clearly is a minor
aspect.
I'd prefer this to be dropped altogether as a fea
>>> On 20.11.17 at 19:28, wrote:
> On 20/11/17 17:14, Jan Beulich wrote:
>>>>> On 20.11.17 at 16:24, wrote:
>>> On 20/11/17 15:20, Jan Beulich wrote:
>>>>>>> On 20.11.17 at 15:14, wrote:
>>>>> On 20/11/17 14:56,
>>> On 20.11.17 at 17:59, wrote:
> On 11/20/2017 11:43 AM, Jan Beulich wrote:
>>>>> On 20.11.17 at 17:28, wrote:
>>> On 11/20/2017 11:26 AM, Jan Beulich wrote:
>>>>>>> On 20.11.17 at 17:14, wrote:
>>>>> What could cause g
g "semantic newlines" [1], to make
> changes easier.
>
> Begin with the basic framework.
>
> Signed-off-by: Ian Jackson
> Signed-off-by: George Dunlap
Acked-by: Jan Beulich
despite ...
> +We also provide security support for Xen-related code in Linux,
> +wh
>>> On 20.11.17 at 17:28, wrote:
> On 11/20/2017 11:26 AM, Jan Beulich wrote:
>>>>> On 20.11.17 at 17:14, wrote:
>>> What could cause grub2 to fail to find space for the pointer in the
>>> first page? Will we ever have anything in EBDA (which is one
>>> On 20.11.17 at 17:14, wrote:
> What could cause grub2 to fail to find space for the pointer in the
> first page? Will we ever have anything in EBDA (which is one of the
> possible RSDP locations)?
Well, the EBDA (see the B in its name) is again something that's
meaningless without there being
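For context on the RSDP locations mentioned here: the ACPI spec places a legacy RSDP either in the first 1 KiB of the EBDA or in the 0xE0000-0xFFFFF BIOS area, on a 16-byte boundary, identified by signature plus checksum. A hedged sketch of just the candidate validation (the physical-memory scan itself is omitted, and the helper name is made up):

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

/* Validate one 16-byte-aligned candidate: "RSD PTR " signature plus
 * the ACPI 1.0 rule that the first 20 bytes sum to 0 mod 256. */
static bool rsdp_candidate_valid(const uint8_t *p)
{
    uint8_t sum = 0;

    if ( memcmp(p, "RSD PTR ", 8) != 0 )
        return false;
    for ( unsigned int i = 0; i < 20; i++ )
        sum += p[i];
    return sum == 0;
}
```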
>>> On 20.11.17 at 16:24, wrote:
> On 20/11/17 15:20, Jan Beulich wrote:
>>>>> On 20.11.17 at 15:14, wrote:
>>> On 20/11/17 14:56, Boris Ostrovsky wrote:
>>>> On 11/20/2017 06:50 AM, Jan Beulich wrote:
>>>>>>>> On 20.11.17 a
>>> On 20.11.17 at 15:10, wrote:
> On 17/11/17 12:10, Jan Beulich wrote:
>>>>> On 16.11.17 at 20:15, wrote:
>>> Doing so amounts to silent state corruption, and must be avoided.
>> I think a little more explanation is needed on why the current code
>>> On 20.11.17 at 15:14, wrote:
> On 20/11/17 14:56, Boris Ostrovsky wrote:
>> On 11/20/2017 06:50 AM, Jan Beulich wrote:
>>>>>> On 20.11.17 at 12:20, wrote:
>>>> Which restriction? I'm loading the RSDP table to its architectural
>>&g
>>> On 20.11.17 at 14:56, wrote:
> On 11/20/2017 06:50 AM, Jan Beulich wrote:
>>>>> On 20.11.17 at 12:20, wrote:
> Which restriction? I'm loading the RSDP table to its architecturally
> correct address if possible, otherwise it will be loaded t
>>> On 20.11.17 at 14:19, wrote:
> Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
>>> On 20.11.17 at 12:20, wrote:
> Which restriction? I'm loading the RSDP table to its architecturally
> correct address if possible, otherwise it will be loaded to the same
> address as without my patch. So I'm not adding a restriction, but
> removing one.
What is "architecturally correct" in PVH
>>> On 20.11.17 at 10:35, wrote:
> On Ma, 2017-10-24 at 13:19 +0300, Petre Pircalabu wrote:
>> From: Razvan Cojocaru
>>
>> For the default EPT view we have xc_set_mem_access_multi(), which
>> is able to set an array of pages to an array of access rights with
>> a single hypercall. However, this f
>>> On 17.11.17 at 12:47, wrote:
> Make sure the HVM mmio area (especially console and Xenstore pages) is
> marked as "reserved" in the guest's E820 map, as otherwise conflicts
> might arise later, e.g. when hotplugging memory into the guest.
This is very certainly wrong. Have you looked at a cou
load entry 20/0
>
> Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1330,6 +1330,7 @@ static int hvm_save_cpu_msrs(struct domain *d,
> hvm_domain_context_t *h)
>
> for_each_vcpu ( d, v )
>
nion.
Oops.
> The reason that these bugs have gone unnoticed for so long is that the only
> MSRs passed like this for PV guests are the AMD DBGEXT MSRs, which only exist
> in fairly modern hardware, and whose use doesn't appear to be implemented in
> any contemporary PV gues
>>> On 16.11.17 at 20:15, wrote:
> Doing so amounts to silent state corruption, and must be avoided.
I think a little more explanation is needed on why the current code
is insufficient. Note specifically this
for ( i = 0; !err && i < ctxt->count; ++i )
{
switch ( ctxt->msr[i].ind
>>> On 16.11.17 at 21:01, wrote:
> Hello,
> Looking at
> https://xenbits.xen.org/xsa/advisory-243.html,
> I cannot find the second patch for xen 4.8, xsa243-4.8-2.patch.
> The text of the advisory leads me to believe that it should be there, so
> it seems to be missing.
The text has xsa243-{4.8-1
>>> On 16.11.17 at 13:30, wrote:
> On Thursday, 16 November 2017 8:30:39 PM AEDT Jan Beulich wrote:
>> >>> On 15.11.17 at 23:48, wrote:
>> > I am having trouble applying the patch 3 from XSA240 update 5 for xen
>> > stable 4.8 and 4.9
>> >
>>> On 15.11.17 at 23:48, wrote:
> Hi,
>
> I am having trouble applying the patch 3 from XSA240 update 5 for xen
> stable 4.8 and 4.9
> xsa240 0003 contains:
>
> CONFIG_PV_LINEAR_PT
>
> from:
>
> x86/mm: Make PV linear pagetables optional
> https://xenbits.xen.org/gitweb/?p=xen.git;a=commitdi
ns.
>
> The second change is adding a missing break that would have potentially
> enabled #VE for the current domain even if it had intended to enable it
> for another one (not a supported functionality).
Thanks, much better.
> Signed-off-by: Adrian Pop
> Reviewed-by: An
>>> On 14.11.17 at 16:11, wrote:
> rcu_lock_current_domain is called at the beginning of do_altp2m_op, but
> the altp2m_vcpu_enable_notify subop handler might skip calling
> rcu_unlock_domain, possibly hanging the domain altogether.
I fully agree with the change, but the description needs improve
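The bug class described above (a subop handler bailing out without dropping the RCU reference taken by the multiplexer) can be sketched as follows; the lock/unlock stubs and the depth counter are test scaffolding, not Xen's actual RCU API:

```c
/* Test scaffolding standing in for rcu_lock_current_domain() /
 * rcu_unlock_domain(): a depth counter that must return to 0. */
static int rcu_depth;
static void rcu_lock(void)   { rcu_depth++; }
static void rcu_unlock(void) { rcu_depth--; }

/* A do_altp2m_op-style multiplexer: every subop funnels through the
 * single exit below, so the reference can never be leaked. The buggy
 * variant instead did "return -1;" directly inside a case. */
static int do_op(int subop)
{
    int rc;

    rcu_lock();

    switch ( subop )
    {
    case 0:
        rc = 0;
        break;
    default:
        rc = -1;
        break;
    }

    rcu_unlock();
    return rc;
}
```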
ed be
> checked after the lock is obtained.
>
> Signed-off-by: Yu Zhang
Reviewed-by: Jan Beulich
> map_pages_to_xen(), modify_xen_mappings() etc. To fix this, this patch will
> check the _PAGE_PRESENT and _PAGE_PSE flags, after the spinlock is obtained,
> for the corresponding L2/L3 entry.
>
> Signed-off-by: Min He
> Signed-off-by: Yi Zhang
> Signed-off-by: Yu Zhang
Revie
>>> On 13.11.17 at 11:34, wrote:
> Our debug showed the concerned page->count_info was already (and
> unexpectedly)
> cleared in free_xenheap_pages(), and the call trace should be like this:
>
> free_xenheap_pages()
> ^
> |
> free_xen_pagetable()
> ^
> |
> map_pages_to_xen()
>
>>> On 13.11.17 at 11:33, wrote:
>> From: Joao Martins [mailto:joao.m.mart...@oracle.com]
>> Sent: 10 November 2017 19:35
>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
>> @@ -96,6 +96,11 @@ unsigned int xenvif_hash_cache_size =
>> XENVIF_HASH_CACHE_SIZE_DEFA
wouldn't it then be better to rename the doc to
pvh.markdown at the same time? Either way
Acked-by: Jan Beulich
Jan
>>> On 10.11.17 at 15:46, wrote:
> On 10/11/17 10:30, Jan Beulich wrote:
>>>>> On 10.11.17 at 09:41, wrote:
>>>2. Drop v->is_running check inside vmx_ctxt_switch_from() making
>>>vmx_vmcs_reload() unconditional.
>>
>>
>>> On 10.11.17 at 15:02, wrote:
> On 11/10/2017 5:57 PM, Jan Beulich wrote:
>>>>> On 10.11.17 at 08:18, wrote:
>>> --- a/xen/arch/x86/mm.c
>>> +++ b/xen/arch/x86/mm.c
>>> @@ -5097,6 +5097,17 @@ int modify_xen_mappings(
>>> On 10.11.17 at 15:05, wrote:
> On 11/10/2017 5:49 PM, Jan Beulich wrote:
>> I'm not certain this is important enough a fix to consider for 4.10,
>> and you seem to think it's good enough if this gets applied only
>> after the tree would be branched, as
oftware.intel.com/sites/default/files/managed/c5/15/\
> architecture-instruction-set-extensions-programming-reference.pdf
>
> Signed-off-by: Yang Zhong
Non-toolstack parts
Acked-by: Jan Beulich
(which you could have picked up from v2 if you hadn't been rushing v3)
Jan
>>> On 10.11.17 at 11:36, wrote:
> The new CPU features in Intel Icelake: AVX512VBMI2/GFNI/VAES/
> AVX512VNNI/AVX512BITALG/VPCLMULQDQ.
>
>
> v2: adjust the patch sequence as suggested by Jan
I'm sorry, but please be a little more patient with sending new versions.
Allow for at least a couple of days, pr
oftware.intel.com/sites/default/files/managed/c5/15/\
> architecture-instruction-set-extensions-programming-reference.pdf
>
> Signed-off-by: Yang Zhong
Properly placed last in the series, the non-toolstack parts here
Acked-by: Jan Beulich
Jan
>>> On 10.11.17 at 09:41, wrote:
> On Thu, 2017-11-09 at 07:49 -0700, Jan Beulich wrote:
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -479,7 +479,13 @@ static void vmx_vcpu_destroy(struct vcpu
>> * we should
>>> On 10.11.17 at 10:50, wrote:
> On 10/11/17 10:33, Roger Pau Monné wrote:
>> On Sat, Nov 04, 2017 at 11:14:35PM +, osstest service owner wrote:
>>> flight 11 xen-unstable real [real]
>>> http://logs.test-lab.xenproject.org/osstest/logs/11/
>>>
>>> Regressions :-(
>>>
>>> Tests whic
>>> On 10.11.17 at 08:18, wrote:
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -5097,6 +5097,17 @@ int modify_xen_mappings(unsigned long s, unsigned long
> e, unsigned int nf)
> */
> if ( (nf & _PAGE_PRESENT) || ((v != e) && (l1_table_offset(v) !=
> 0)) )
>
>>> On 10.11.17 at 10:40, wrote:
>> Anthony PERARD
>> Sent: 09 November 2017 17:50
>> The problem is that QEMU 4.10 has a lock on the disk image. When
>> booting an HVM guest with a qdisk backend, the disk is open twice, but
>> can only be locked once, so when the pv disk is being initialized, the
{
> +if ( locking )
> +spin_unlock(&map_pgdir_lock);
> +continue;
> + }
> +
> ol3e = *pl3e;
Same here - move the if() below here and use ol3e in there.
With that
Reviewed-by: Jan Beulich
I'm not certain this is imp
>>> On 10.11.17 at 10:36, wrote:
> Yang Zhong (4):
> x86/cpuid: Enable new SSE/AVX/AVX512 cpu features
The ordering is wrong - as said before, these ...
> x86emul: Support GFNI insns
> x86emul: Support vpclmulqdq
> x86emul: Support vaes insns
... are supposed to be prereqs of the actual
>>> On 09.11.17 at 15:16, wrote:
> On Thu, 2017-11-09 at 06:08 -0700, Jan Beulich wrote:
>> Tasklets already take care of this by
>> calling sync_local_execstate() before calling the handler. But
>> for softirqs this isn't really an option; I'm surpris
>>> On 09.11.17 at 16:48, wrote:
> On 09/11/17 15:47, Jan Beulich wrote:
>>>>> On 09.11.17 at 16:39, wrote:
>>> What I meant is you would replace the 4 occurrences by
>>> mfn_to_page(_mfn(...)). If you are happy with that, then fine.
>>
>
>>> On 09.11.17 at 16:37, wrote:
> These tables are pointed to from FADT. Adding them will
> result in duplicate entries in the guest's tables.
>
> Signed-off-by: Boris Ostrovsky
Reviewed-by: Jan Beulich
>>> On 09.11.17 at 16:39, wrote:
> On 09/11/17 15:36, Jan Beulich wrote:
>>>>> On 09.11.17 at 16:20, wrote:
>>> I had a look at the files that need converting. It seems there are a few
>>> files with page_to_mfn/mfn_to_page re-defined but no ca
>>> On 09.11.17 at 16:20, wrote:
> I had a look at the files that need converting. It seems there are a few
> files with page_to_mfn/mfn_to_page re-defined but no callers:
> - arch/x86/mm/hap/nested_hap.c
> - arch/x86/mm/p2m-pt.c
> - arch/x86/pv/traps.c
> - arch/x86/pv/mm.c
>>> On 09.11.17 at 16:07, wrote:
> On Thu, Nov 09, 2017 at 06:18:21AM -0700, Jan Beulich wrote:
>> >>> On 09.11.17 at 12:31, wrote:
>> > On Thu, Nov 09, 2017 at 03:49:23PM +0530, Bhupinder Thakur wrote:
>> >>
See the code comment being added for why we need this.
Reported-by: Igor Druzhinin
Signed-off-by: Jan Beulich
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -479,7 +479,13 @@ static void vmx_vcpu_destroy(struct vcpu
* we should disable PML manually here. Note that
>>> On 09.11.17 at 15:42, wrote:
> Hi,
>
> On 09/11/17 08:55, Jan Beulich wrote:
>>>>> On 08.11.17 at 20:46, wrote:
>>> Do it once at domain creation (hpet_init).
>>>
>>> Sleep -> Resume cycles will end up crashing an HVM g
>>> On 09.11.17 at 15:16, wrote:
> Ah, yes, my bad! What if I take vcpu_migrate() out of the above exec-
> trace (which is what I wanted to do in my email already)?
>
> pCPU1
> =
> current == vCPU1
> context_switch(next == idle)
> !! __context_switch() is skipped
> anything_that_uses_or_touch
>>> On 09.11.17 at 12:31, wrote:
> On Thu, Nov 09, 2017 at 03:49:23PM +0530, Bhupinder Thakur wrote:
>> +static int ns16550_init_dt(struct ns16550 *uart,
>> + const struct dt_device_node *dev)
>> +{
>> +return -EINVAL;
>> +}
>> +#endif
>> +
>> +#ifdef CONFIG_ACPI
>> +
>>> On 09.11.17 at 12:01, wrote:
> Anyway, as I was trying to explain replaying to Jan, although in this
> situation the issue manifests as a consequence of vCPU migration, I
> think it is indeed more general, as in, without even the need to
> consider a second pCPU:
>
> pCPU1
> =
> current =
>>> On 09.11.17 at 11:36, wrote:
> Well, I'm afraid I only see two solutions:
> 1) we get rid of lazy context switch;
> 2) whatever it is that is happening at point c above, it needs to be
>aware that we use lazy context switch, and make sure to sync the
>context before playing with or a
>>> On 09.11.17 at 11:24, wrote:
> On 11/9/2017 5:19 PM, Jan Beulich wrote:
>> 2) Is your change actually enough to take care of all forms of the
>> race you describe? In particular, isn't it necessary to re-check PSE
>> after having taken the lock, in case ano
>>> On 09.11.17 at 10:54, wrote:
> On Tue, 2017-11-07 at 14:24 +, Igor Druzhinin wrote:
>> Perhaps I should improve my diagram:
>>
>> pCPU1: vCPUx of domain X -> migrate to pCPU2 -> switch to idle
>> context
>> -> RCU callbacks -> vcpu_destroy(vCPUy of domain Y) ->
>> vmx_vcpu_disable_pml() -
>>> On 07.11.17 at 16:52, wrote:
> There is one thing that I'm worrying about with this approach:
>
> At this place we just sync the idle context because we know that we are
> going to deal with VMCS later. But what about other potential cases
> (perhaps some softirqs) in which we are accessing
>>> On 09.11.17 at 16:29, wrote:
> In map_pages_to_xen(), a L2 page table entry may be reset to point to
> a superpage, and its corresponding L1 page table need be freed in such
> scenario, when these L1 page table entries are mapping to consecutive
> page frames and having the same mapping flags.
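The coalescing condition described in this patch (consecutive page frames, identical flags) can be sketched as a standalone check; the PTE layout assumed here (low 12 bits = flags, remainder = frame number) is a simplification of x86's real format, not Xen's actual macros:

```c
#include <stdint.h>
#include <stdbool.h>

#define L1_ENTRIES   512
#define PTE_FLAGS(e) ((e) & 0xfffULL)
#define PTE_MFN(e)   ((e) >> 12)

/* An L1 table may be replaced by a 2MiB superpage mapping (and then
 * freed) iff all 512 entries map consecutive frames with equal flags. */
static bool l1_can_coalesce(const uint64_t *l1)
{
    uint64_t base_mfn = PTE_MFN(l1[0]);
    uint64_t flags    = PTE_FLAGS(l1[0]);

    for ( unsigned int i = 1; i < L1_ENTRIES; i++ )
        if ( PTE_MFN(l1[i]) != base_mfn + i || PTE_FLAGS(l1[i]) != flags )
            return false;

    return true;
}
```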
>>> On 09.11.17 at 16:29, wrote:
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4844,9 +4844,10 @@ int map_pages_to_xen(
> {
> unsigned long base_mfn;
>
> -pl1e = l2e_to_l1e(*pl2e);
> if ( locking )
>
>>> On 08.11.17 at 21:19, wrote:
> These tables are pointed to from FADT. Adding them will
> result in duplicate entries in the guest's tables.
Oh, indeed. Just one small adjustment request:
> +static bool __init pvh_acpi_table_in_xsdt(const char *sig)
> +{
> +/*
> + * DSDT and FACS are
ld not
normally be marked "inline" explicitly - that decision should be left
to the compiler.
As doing the adjustment it relatively simple, I wouldn't mind
doing so while committing, saving another round trip. With
that adjustment (or at the very least with the "inline"
>>> On 09.11.17 at 00:06, wrote:
> --- a/drivers/xen/xen-pciback/pci_stub.c
> +++ b/drivers/xen/xen-pciback/pci_stub.c
> @@ -244,6 +244,91 @@ struct pci_dev *pcistub_get_pci_dev(struct
> xen_pcibk_device *pdev,
> return found_dev;
> }
>
> +struct pcistub_args {
> + struct pci_dev *de
>>> On 08.11.17 at 16:44, wrote:
> On 11/7/2017 8:40 AM, Jan Beulich wrote:
>>>>> On 06.11.17 at 18:48, wrote:
>>> --- a/Documentation/ABI/testing/sysfs-driver-pciback
>>> +++ b/Documentation/ABI/testing/sysfs-driver-pciback
>>> @@ -11,3 +11,
>>> On 09.11.17 at 02:44, wrote:
> On 11/07/17 01:37 -0700, Jan Beulich wrote:
>> I don't believe a crash is the expected outcome here.
>>
>
> This test case injects two errors to the same dom0 page. During the
> first injection, offline_page() is called
>>> On 08.11.17 at 13:45, wrote:
> On 08/11/17 13:31, Jan Beulich wrote:
>>>>> On 08.11.17 at 12:55, wrote:
>>> On 08/11/17 12:18, Jan Beulich wrote:
>>>>>>> On 08.11.17 at 10:07, wrote:
>>>>> In case we are booted via
>>> On 07.11.17 at 13:31, wrote:
> ENOSYS should only be used by unimplemented top-level syscalls. Use
> EOPNOTSUPP instead.
>
> Signed-off-by: Roger Pau Monné
> Reported-by: Jan Beulich
Btw I've taken the liberty to make the title say "