entry->edx &= kvm_cpuid_7_0_edx_x86_features;
> entry->edx &= get_scattered_cpuid_leaf(7, 0, CPUID_EDX);
>
Reviewed-by: David Hildenbrand
--
Thanks,
David
On 24.08.2017 12:12, Paolo Bonzini wrote:
> Move it to struct kvm_arch_vcpu, replacing guest_pkru_valid with a
> simple comparison against the host value of the register. The write of
> PKRU in addition can be skipped if the guest has not enabled the feature.
> Once we do this, we need not test
On 02.08.2017 09:38, Paolo Bonzini wrote:
> On 01/08/2017 19:37, Radim Krčmář wrote:
>>
>> Do you think it's less ugly than the other two options?
>
> It's awesome, but it's a non-trivial project of its own. :)
>
> Paolo
>
That would be a perfect cleanup and I'll be happy to review it ;)
... until
>
> +static bool valid_ept_address(struct kvm_vcpu *vcpu, u64 address)
> +{
> + struct vcpu_vmx *vmx = to_vmx(vcpu);
> + u64 mask = address & 0x7;
> + int maxphyaddr = cpuid_maxphyaddr(vcpu);
> +
> + /* Check for memory type validity */
> + switch (mask) {
> + case 0:
>
On 02.08.2017 22:41, Radim Krčmář wrote:
> This series does the generalization that we've spoken about recently
> Might be a good time to change the function names as well.
>
>
> Radim Krčmář (2):
> KVM: x86: generalize guest_cpuid_has_ helpers
> KVM: x86: use general helpers for some cpuid
>> Minor nit: Can't you directly do
>>
>> kunmap(page);
>> nested_release_page_clean(page);
>>
>> at this point?
>>
>> We can fix this up later.
>
> You actually can do simply kvm_vcpu_read_guest_page(vcpu,
> vmcs12->eptp_list_address >> PAGE_SHIFT, &address, index * 8, 8).
>
Fascinating how nested is
ni
This looks like the right thing to do!
(and as mentioned, also properly marks the page as dirty)
Reviewed-by: David Hildenbrand
--
Thanks,
David
On 27.07.2017 15:20, Paolo Bonzini wrote:
> Expose the "Enable INVPCID" secondary execution control to the guest
> and properly reflect the exit reason.
>
> In addition, before this patch the guest was always running with
> INVPCID enabled, causing pcid.flat's "Test on INVPCID when disabled"
>
Hi,
I can see that we allocate under x86 for each KVM memslot
"sizeof(unsigned short) * npages" for page_track.
So 2 bytes for each 4096 bytes of memory slot size. This doesn't sound
like a lot, but if we have very big memory slots (e.g. for NVDIMM), this
can quickly get out of hand. E.g. for 4TB, we
struct vcpu_vmx *vmx = to_vmx(vcpu);
> +#ifdef CONFIG_X86_64
> int cpu = raw_smp_processor_id();
> +#endif
> int i;
>
> if (vmx->host_state.loaded)
>
Can't we get rid of the variable and just do it inline?
Anyhow, this fixes the problem.
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
emulate_instruction(vcpu, EMULTYPE_TRAP_UD);
> + if (er == EMULATE_USER_EXIT)
> + return 0;
> + if (er != EMULATE_DONE)
> + kvm_queue_exception(vcpu, UD_VECTOR);
> + return 1;
I would now actually prefer
if (er == EMULATE_DONE)
return 1 ...
Anyhow,
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 1eb495e..a55ecef 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -146,6 +146,9 @@ bool __read_mostly enable_vmware_backdoor = false;
> module_param(enable_vmware_backdoor, bool, S_IRUGO);
>
On 04.04.2018 19:12, Paolo Bonzini wrote:
> On 04/04/2018 13:54, David Hildenbrand wrote:
>>> +{
>>> + enum emulation_result er;
>>> +
>>> + er = emulate_instruction(vcpu, EMULTYPE_TRAP_UD);
>>> + if (er == EMULATE_USER_EXIT)
>>
On 05.04.2018 12:48, Pankaj Gupta wrote:
> This patch adds virtio-pmem Qemu device.
>
> This device configures memory address range information with file
> backend type. It acts like persistent memory device for KVM guest.
> It presents the memory address range to virtio-pmem driver over
>
>>
>> So right now you're just using some memdev for testing.
>
> yes.
>
>>
>> I assume that the memory region we will provide to the guest will be a
>> simple memory mapped raw file. Dirty tracking (using the kvm slot) will
>> be used to detect which blocks actually changed and have to be
Let's mark all offline pages with PG_offline. We'll continue to mark
them reserved.
Signed-off-by: David Hildenbrand
---
drivers/hv/hv_balloon.c | 2 +-
mm/memory_hotplug.c | 10 ++
mm/page_alloc.c | 5 -
3 files changed, 11 insertions(+), 6 deletions(-)
diff --git
Conflicts with the possibility to online sub-section chunks. Revert it
for now.
Signed-off-by: David Hildenbrand
---
drivers/base/node.c| 2 --
include/linux/memory.h | 1 -
mm/memory_hotplug.c| 27 +++
mm/page_alloc.c| 28
of each page. E.g. to not read memory that is not online
during kexec() or to properly mark a section as offline as soon as all
contained pages are offline.
Signed-off-by: David Hildenbrand
---
include/linux/page-flags.h | 10 ++
include/trace/events/mmflags.h | 9 -
2 files
This allows user space to skip pages that are offline when dumping. This is
especially relevant when dealing with pages that have been unplugged in
the context of virtualization, and their backing storage has already
been freed.
Signed-off-by: David Hildenbrand
---
kernel/crash_core.c | 3
If any page is still online, the section should stay online.
Signed-off-by: David Hildenbrand
---
mm/page_alloc.c | 2 +-
mm/sparse.c | 25 -
2 files changed, 25 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2e5dcfdb0908
Let document offline_pages() while at it.
Signed-off-by: David Hildenbrand
---
mm/memory_hotplug.c | 22 --
1 file changed, 20 insertions(+), 2 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 3a8d56476233..1d6054edc241 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
Kernel modules that want to control how/when memory is onlined/offlined
need these functions.
Signed-off-by: David Hildenbrand
---
mm/memory_hotplug.c | 4
1 file changed, 4 insertions(+)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index ac14ea772792..3c374d308cf4 100644
e onlining/offlining will still work, however
the memory will not be actually onlined/offlined. That has to be handled
by the device driver that owns the memory.
Signed-off-by: David Hildenbrand
---
drivers/base/memory.c | 22 ++
drivers/xen/balloon.c | 2 +-
inc
On 13.04.2018 15:32, David Hildenbrand wrote:
> If any page is still online, the section should stay online.
>
> Signed-off-by: David Hildenbrand
> ---
This is a duplicate, please ignore.
(get_maintainers.sh and my mail server had a little clinch, so I had to
send half of th
On 13.04.2018 15:40, Michal Hocko wrote:
> On Fri 13-04-18 15:16:26, David Hildenbrand wrote:
>> online_pages()/offline_pages() theoretically allows us to work on
>> sub-section sizes. This is especially relevant in the context of
>> virtualization. It e.g. allows us to add/r
On 13.04.2018 17:59, Michal Hocko wrote:
> On Fri 13-04-18 15:33:28, David Hildenbrand wrote:
>> Some devices (esp. paravirtualized) might want to control
>> - when to online/offline a memory block
>> - how to online memory (MOVABLE/NORMAL)
>> - in which granulari
On 13.04.2018 15:16, David Hildenbrand wrote:
> online_pages()/offline_pages() theoretically allows us to work on
> sub-section sizes. This is especially relevant in the context of
> virtualization. It e.g. allows us to add/remove memory to Linux in a VM in
> 4MB chunks.
>
&g
On 19.04.2018 23:13, Tony Krowiak wrote:
> Introduces a new function to reset the crypto attributes for all
> vcpus whether they are running or not. Each vcpu in KVM will
> be removed from SIE prior to resetting the crypto attributes in its
> SIE state description. After all vcpus have had their
On 20.04.2018 14:26, David Hildenbrand wrote:
> On 19.04.2018 23:13, Tony Krowiak wrote:
>> Introduces a new function to reset the crypto attributes for all
>> vcpus whether they are running or not. Each vcpu in KVM will
>> be removed from SIE prior to resetting the crypto att
On 22.04.2018 05:01, Matthew Wilcox wrote:
> On Sat, Apr 21, 2018 at 06:52:18PM +0200, Vlastimil Babka wrote:
>> On 04/13/2018 07:11 PM, Matthew Wilcox wrote:
>>> On Fri, Apr 13, 2018 at 03:16:26PM +0200, David Hildenbrand wrote:
>>>> online_pages()/offline_pages() t
On 22.04.2018 16:02, Matthew Wilcox wrote:
> On Sun, Apr 22, 2018 at 10:17:31AM +0200, David Hildenbrand wrote:
>> On 22.04.2018 05:01, Matthew Wilcox wrote:
>>> On Sat, Apr 21, 2018 at 06:52:18PM +0200, Vlastimil Babka wrote:
>>>> Sounds like your newly introduce
On 13.04.2018 19:11, Matthew Wilcox wrote:
> On Fri, Apr 13, 2018 at 03:16:26PM +0200, David Hildenbrand wrote:
>> online_pages()/offline_pages() theoretically allows us to work on
>> sub-section sizes. This is especially relevant in the context of
>> virtualization. It
On 09.04.2018 05:26, Stefan Hajnoczi wrote:
> On Thu, Apr 05, 2018 at 08:09:26AM -0400, Pankaj Gupta wrote:
>>> Will this raw file already have the "disk information header" (no idea
>>> how that stuff is called) encoded? Are there any plans/possible ways to
>>>
>>> a) automatically create the
{
> kvm_mmu_set_mask_ptes(VMX_EPT_READABLE_MASK,
> enable_ept_ad_bits ? VMX_EPT_ACCESS_BIT : 0ull,
>
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
s_sel));
> #endif
> if (boot_cpu_has(X86_FEATURE_MPX))
> rdmsrl(MSR_IA32_BNDCFGS, vmx->host_state.msr_host_bndcfgs);
>
Much better indeed :)
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
On 09.04.2018 10:37, KarimAllah Ahmed wrote:
> From: Jim Mattson
>
> For nested virtualization L0 KVM is managing a bit of state for L2 guests,
> this state can not be captured through the currently available IOCTLs. In
> fact the state captured through all of these IOCTLs is usually a mix of L1
> @@ -604,7 +604,8 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2
> *entry, u32 function,
>(1 << KVM_FEATURE_PV_EOI) |
>(1 << KVM_FEATURE_CLOCKSOURCE_STABLE_BIT) |
> (1 << KV
On 27.02.2018 15:28, Tony Krowiak wrote:
> Set effective masks and CRYCB format in the shadow copy of the
> guest level 2 CRYCB.
>
> Signed-off-by: Tony Krowiak
> ---
> arch/s390/include/asm/kvm-ap.h |2 +
> arch/s390/kvm/kvm-ap.c |5 +++
> arch/s390/kvm/vsie.c | 71
On 27.02.2018 15:28, Tony Krowiak wrote:
> Introduces a new interface to enable AP interpretive
> execution (IE) mode for the KVM guest. When running
> with IE mode enabled, AP instructions executed on the
> KVM guest will be interpreted by the firmware and
> passed directly through to an AP
On 27.02.2018 15:28, Tony Krowiak wrote:
> Introduces a new CPU model feature and two CPU model
> facilities to support AP virtualization for KVM guests.
>
> CPU model feature:
>
> The KVM_S390_VM_CPU_FEAT_AP feature indicates that the
> AP facilities are installed on the KVM guest. This
>
> +static int vfio_ap_mdev_open(struct mdev_device *mdev)
> +{
> + struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
> + unsigned long events;
> + int ret;
> +
> + matrix_mdev->group_notifier.notifier_call = vfio_ap_mdev_group_notifier;
> + events =
+ cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
> +
> + return edx & bit(KVM_CPUID_BIT_SPEC_CTRL);
> +}
> +
> +static inline bool cpu_has_ibpb_support(void)
> +{
> + return cpuid_ebx(0x80000008) & bit(KVM_CPUID_BIT_IBPB_SUPPORT);
> +}
> +
> static inline bool supports_cpuid_fault(struct kvm_vcpu *vcpu)
> {
> return vcpu->arch.msr_platform_info & MSR_PLATFORM_INFO_CPUID_FAULT;
>
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
> { .index = MSR_IA32_SYSENTER_CS,.always = true },
>
The function set_msr_interception() is really confusingly named.
If we pass in a "1", we clear the bit, resulting in no interception.
So 1 == no intercept, 0 == intercept. This function would be better named
>>> + /*
>>> +* If the (L2) guest does a vmfunc to the currently
>>> +* active ept pointer, we don't have to do anything else
>>> +*/
>>> + if (vmcs12->ept_pointer != address) {
>>> + if (address >> cpuid_maxphyaddr(vcpu) ||
>>> + !IS_ALIGNED(address, 4096))
> /*
> * The exit handlers return 1 if the exit was handled fully and guest
> execution
> * may resume. Otherwise they set the kvm_run parameter to indicate what
> needs
> @@ -7790,6 +7806,7 @@ static int (*const kvm_vmx_exit_handlers[])(struct
> kvm_vcpu *vcpu) = {
>
> @@ -7752,7 +7769,29 @@ static int handle_preemption_timer(struct kvm_vcpu
> *vcpu)
>
> static int handle_vmfunc(struct kvm_vcpu *vcpu)
> {
> - kvm_queue_exception(vcpu, UD_VECTOR);
> + struct vcpu_vmx *vmx = to_vmx(vcpu);
> + struct vmcs12 *vmcs12;
> + u32 function =
On 10.07.2017 11:17, Paolo Bonzini wrote:
>
>
> On 10/07/2017 10:54, David Hildenbrand wrote:
>>
>>> /*
>>> * The exit handlers return 1 if the exit was handled fully and guest
>>> execution
>>> * may resume. Otherwise they se
On 10.07.2017 11:20, Claudio Imbrenda wrote:
Minor minor nit:
The subject should state "creating or destroying a VM"
> This patch adds a few lines to the KVM common code to fire a
> KOBJ_CHANGE uevent whenever a KVM VM is created or destroyed. The event
> carries five environment variables:
>
On 10.07.2017 13:03, Paolo Bonzini wrote:
>
>
> On 10/07/2017 11:17, David Hildenbrand wrote:
>>> +
>>> + vmcs12 = get_vmcs12(vcpu);
>>> + if ((vmcs12->vm_function_control & (1 << function)) == 0)
>> (learned that in c, shifting b
es really start to get too lengthy to be useful)
> + return VMXERR_ENTRY_INVALID_CONTROL_FIELD;
> +
> if (vmcs12->cr3_target_count > nested_cpu_vmx_misc_cr3_count(vcpu))
> return VMXERR_ENTRY_INVALID_CONTROL_FIELD;
>
>
Feel free to ignore my comments.
Reviewed-by: David Hildenbrand
--
Thanks,
David
On 10.07.2017 22:49, Bandan Das wrote:
> When L2 uses vmfunc, L0 utilizes the associated vmexit to
> emulate a switching of the ept pointer by reloading the
> guest MMU.
>
> Signed-off-by: Paolo Bonzini
> Signed-off-by: Bandan Das
> ---
> arch/x86/include/asm/vmx.h | 6 +
>
kvm_pit_set_reinject(pit, false);
> hrtimer_cancel(&pit->pit_state.timer);
> kthread_destroy_worker(pit->worker);
>
I tried to verify that we don't have any hierarchical locking inversion
in the way here (kvm->lock, kvm->slots_lock, kvm->irq_lock).
Hopefully I did this carefully enough :)
Reviewed-by: David Hildenbrand
--
Thanks,
David
On 05.07.2017 12:35, Paolo Bonzini wrote:
> The uniprocessor version of smp_call_function_many does not evaluate
> all of its argument, and the compiler emits a warning about "wait"
> being unused. This breaks the build on architectures for which
> "-Werror" is enabled by default.
>
> Work
3] =
> + vmcs_read64(GUEST_PHYSICAL_ADDRESS);
vcpu->run->internal.data[vcpu->run->internal.ndata++] = ...
So we don't have to name the position explicitly.
Whatever you prefer.
Reviewed-by: David Hildenbrand
> + }
> return 0;
> }
>
>
--
Thanks,
David
On 05.07.2017 14:24, Paolo Bonzini wrote:
>
>
> On 05/07/2017 14:22, David Hildenbrand wrote:
>>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>>> index f0fe9d02f6bb..09368501d9cf 100644
>>> --- a/virt/kvm/kvm_main.c
>>> +++ b/virt/kvm/kvm
> + add_uevent_var(env, "CREATED=%llu", created);
> + add_uevent_var(env, "COUNT=%llu", active);
I like that much better.
> +
> + if (type == KVM_EVENT_CREATE_VM)
> + add_uevent_var(env, "EVENT=create");
> + else if (type == KVM_EVENT_DESTROY_VM)
> +
Hi Martin,
thanks for having a look at this!
>> However, we
>> might be able to let 2k and 4k page tables co-exist. We only need
>> 4k page tables whenever we want to expose such memory to a guest. So
>> turning on 4k page table allocation at one point and only allowing such
>> memory to go into
On 01.06.2017 13:39, Christian Borntraeger wrote:
> On 05/29/2017 06:32 PM, David Hildenbrand wrote:
>
>> new = old = pgste_get_lock(ptep);
>> pgste_val(new) &= ~(PGSTE_GR_BIT | PGSTE_GC_BIT |
>> @@ -748,6 +764,11 @@ int reset_guest_reference_bit(struct mm_s
> diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
> index d4d409b..b22c2b6 100644
> --- a/arch/s390/mm/pgtable.c
> +++ b/arch/s390/mm/pgtable.c
> @@ -196,7 +196,7 @@ static inline pgste_t ptep_xchg_start(struct mm_struct
> *mm,
> {
> pgste_t pgste = __pgste(0);
>
> - if
On 02.06.2017 09:02, Heiko Carstens wrote:
> On Thu, Jun 01, 2017 at 12:46:51PM +0200, Martin Schwidefsky wrote:
>>> Unfortunately, converting all page tables to 4k pgste page tables is
>>> not possible without provoking various race conditions.
>>
>> That is one approach we tried and was found to
On 06.12.2017 15:40, David Hildenbrand wrote:
> On 06.12.2017 12:59, Wanpeng Li wrote:
>> From: Wanpeng Li
>>
>> *** Guest State ***
>> CR0: actual=0x0030, shadow=0x6010,
>> gh_mask=fff7
>> CR4: actual=0x
861e77b9 100644
> --- a/arch/x86/kvm/mmu_audit.c
> +++ b/arch/x86/kvm/mmu_audit.c
> @@ -19,7 +19,7 @@
>
> #include
>
> -char const *audit_point_name[] = {
> +static char const *audit_point_name[] = {
> "pre page fault",
> "post page fault",
> "pre pte write",
>
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
mx_arm_hv_timer(vcpu);
>
> @@ -9510,8 +9512,8 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu
> *vcpu)
> );
>
> /* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */
> - if (debugctlmsr)
> - update_debugctlmsr(debugctlmsr);
> + if (vmx->host_debugctlmsr)
> + update_debugctlmsr(vmx->host_debugctlmsr);
>
> #ifndef CONFIG_X86_64
> /*
>
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
radically by
> print
> when running the testcase heavy multithreading. The do_debug() which is
> triggered
> by rep movsb will splash when (dr6 & DR_STEP && !user_mode(regs)).
>
> This patch fixes it by restoring host dr6 unconditionally before preempt/irq
> enable.
>
On 08.12.2017 11:22, Quan Xu wrote:
> From: Quan Xu
>
> Since KVM removes the only I/O port 0x80 bypass on Intel hosts,
> clear CPU_BASED_USE_IO_BITMAPS and set CPU_BASED_UNCOND_IO_EXITING
> bit. Then these I/O permission bitmaps are not used at all, so
> drop I/O permission bitmaps.
>
>
On 10.12.2017 22:44, Wanpeng Li wrote:
> From: Wanpeng Li
>
> [ cut here ]
> Bad FPU state detected at kvm_put_guest_fpu+0xd8/0x2d0 [kvm], reinitializing
> FPU registers.
> WARNING: CPU: 1 PID: 4594 at arch/x86/mm/extable.c:103
> ex_handler_fprestore+0x88/0x90
> CPU:
On 10.12.2017 01:44, Wanpeng Li wrote:
> 2017-12-08 20:39 GMT+08:00 David Hildenbrand :
>> On 08.12.2017 10:12, Wanpeng Li wrote:
>>> From: Wanpeng Li
>>>
>>> Reported by syzkaller:
>>>
>>>WARNING: CPU: 0 PID: 12927 at arch/x86/kernel/t
On 11.12.2017 22:51, Wanpeng Li wrote:
> 2017-12-12 4:48 GMT+08:00 David Hildenbrand :
>> On 10.12.2017 22:44, Wanpeng Li wrote:
>>> From: Wanpeng Li
>>>
>>> [ cut here ]
>>> Bad FPU state detected at kvm_put_guest_fpu+0xd8/0
On 06.11.2017 17:14, Paolo Bonzini wrote:
> On 06/11/2017 17:01, David Hildenbrand wrote:
>> On 06.11.2017 16:10, Nick Desaulniers wrote:
>>> Does it have to be stack allocated?
>>
>> We can't use kmalloc and friends in emulate.c. We would have to
>&
On 06.11.2017 17:37, Paolo Bonzini wrote:
> On 06/11/2017 17:19, David Hildenbrand wrote:
>> On 06.11.2017 17:14, Paolo Bonzini wrote:
>>> On 06/11/2017 17:01, David Hildenbrand wrote:
>>>> On 06.11.2017 16:10, Nick Desaulniers wrote:
>>>>> Does it h
On 31.10.2017 12:47, Dmitry Vyukov wrote:
> On Tue, Oct 31, 2017 at 2:34 PM, syzbot
>
> wrote:
>> Hello,
>>
>> syzkaller hit the following crash on
>> 0787643a5f6aad1f0cdeb305f7fe492b71943ea4
>> git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/master
>> compiler: gcc (GCC) 7.1.1
ways_inline struct cpuid_reg
> x86_feature_cpuid(unsigned x86_feature)
> {
> unsigned x86_leaf = x86_feature / 32;
>
> - BUILD_BUG_ON(!__builtin_constant_p(x86_leaf));
> BUILD_BUG_ON(x86_leaf >= ARRAY_SIZE(reverse_cpuid));
> BUILD_BUG_ON(reverse_cpuid[x86_leaf].function == 0);
>
>
Reviewed-by: David Hildenbrand
--
Thanks,
David
~CPU_BASED_USE_IO_BITMAPS;
> exec_control |= CPU_BASED_UNCOND_IO_EXITING;
>
> Thanks,
>
> Paolo
>
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
>
> switch (ioctl) {
> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index ba8134a989c1..2e700753e35c 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -1607,12 +1607,11 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu
>
On 12.12.2017 18:48, Paolo Bonzini wrote:
> On 12/12/2017 18:47, David Hildenbrand wrote:
>>> @@ -2547,13 +2547,13 @@ static long kvm_vcpu_ioctl(struct file *filp,
>>> #if defined(CONFIG_S390) || defined(CONFIG_PPC) || defined(CONFIG_MIPS)
>> can we now also
print
> when running the testcase heavy multithreading. The do_debug() which is
> triggered
> by rep movsb will splash when (dr6 & DR_STEP && !user_mode(regs)).
>
> This patch fixes it by restoring host dr6 in sched_out if no breakpoint is
> active.
>
> Report
> if (unlikely(vcpu->arch.mp_state == KVM_MP_STATE_UNINITIALIZED)) {
> @@ -7709,6 +7706,7 @@ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu,
> struct kvm_fpu *fpu)
> static void fx_init(struct kvm_vcpu *vcpu)
> {
> fpstate_init(&vcpu->arch.guest_fpu.state);
> +
_S390_INTERRUPT || ioctl == KVM_S390_IRQ || ioctl ==
> KVM_INTERRUPT)
> - return kvm_arch_vcpu_ioctl(filp, ioctl, arg);
> -#endif
> -
> + r = kvm_arch_vcpu_async_ioctl(filp, ioctl, arg);
> + if (r != -ENOIOCTLCMD)
> + return r;
>
> if (mutex_lock_killable(&vcpu->mutex))
> return -EINTR;
>
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
On 13.12.2017 02:33, Wanpeng Li wrote:
> From: Wanpeng Li
>
> This patch reuses the preempted field in kvm_steal_time, and will export
> the vcpu running/pre-empted information to the guest from host. This will
> enable guest to intelligently send ipi to running vcpus and set flag for
>
On 13.12.2017 12:38, Wanpeng Li wrote:
> 2017-12-13 18:20 GMT+08:00 David Hildenbrand :
>> On 13.12.2017 02:33, Wanpeng Li wrote:
>>> From: Wanpeng Li
>>>
>>> This patch reuses the preempted field in kvm_steal_time, and will export
>>> the vcpu run
On 18.01.21 14:21, Anshuman Khandual wrote:
>
>
> On 1/18/21 6:43 PM, Anshuman Khandual wrote:
>> From: David Hildenbrand
>>
>> Right now, we only check against MAX_PHYSMEM_BITS - but turns out there
>> are more restrictions of which memory we can actuall
G_ON() check that would ensure that memhp_range_allowed() has already
> been called on the hotplug path.
>
> Cc: Heiko Carstens
> Cc: Vasily Gorbik
> Cc: David Hildenbrand
> Cc: linux-s...@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Acked-by: Heiko Carstens
hotplug call chain, which inevitably fails the hotplug itself.
>
> This mechanism was suggested by David Hildenbrand during another discussion
> with respect to a memory hotplug fix on arm64 platform.
>
> https://lore.kernel.org/linux-arm-kernel/1600332402-30123-1-git-send-emai
On 18.01.21 14:12, Anshuman Khandual wrote:
> This introduces memhp_range_allowed() which can be called in various memory
> hotplug paths to prevalidate the address range which is being added, with
> the platform. Then memhp_range_allowed() calls memhp_get_pluggable_range()
> which provides
a VM_BUG_ON() check that would ensure that memhp_range_allowed()
> has already been called.
>
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Ard Biesheuvel
> Cc: Mark Rutland
> Cc: David Hildenbrand
> Cc: linux-arm-ker...@lists.infradead.org
> Cc: linux-kernel@vg
On 18.01.21 14:21, Anshuman Khandual wrote:
>
>
> On 1/18/21 6:43 PM, Anshuman Khandual wrote:
>> From: David Hildenbrand
>>
>> Right now, we only check against MAX_PHYSMEM_BITS - but turns out there
>> are more restrictions of which memory we can actuall
duce the section size to 128MB for 4K
> and 16K base page size configs, and to 512MB for 64K base page size config.
>
> Signed-off-by: Sudarshan Rajagopalan
> Suggested-by: Anshuman Khandual
> Suggested-by: David Hildenbrand
> Cc: Catalin Marinas
> Cc: Will Deacon
>
s bits in the !vmemmap case. Also section size needs to be multiple
>> of 128MB to have PMD based vmemmap mapping with CONFIG_ARM64_4K_PAGES.
>>
>> Given these constraints, lets just reduce the section size to 128MB for 4K
>> and 16K base page size configs, and to 512MB fo
+2084,6 @@ unsigned long __init memblock_free_all(void)
>
> pages = free_low_memory_core_early();
> totalram_pages_add(pages);
> -
> - return pages;
> }
>
> #if defined(CONFIG_DEBUG_FS) && defined(CONFIG_ARCH_KEEP_MEMBLOCK)
>
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
On 14.01.21 12:31, Miaohe Lin wrote:
> When gbl_reserve is 0, hugetlb_acct_memory() will do nothing except holding
> and releasing hugetlb_lock.
So, what's the deal then? Adding more code?
If this is a performance improvement, we should spell it out. Otherwise
I don't see a real benefit of this
s, like CommitLimit going negative.
>*/
> if (hstate_is_gigantic(h))
> - adjust_managed_page_count(page, 1 << h->order);
> + adjust_managed_page_count(page, pages_per_huge_page(h));
> cond_resched();
> }
> }
>
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
On 16.01.21 04:40, John Hubbard wrote:
> On 1/15/21 11:46 AM, David Hildenbrand wrote:
>>>> 7) There is no easy way to detect if a page really was pinned: we might
>>>> have false positives. Further, there is no way to distinguish if it was
>>>> pinned with F
ot cause a functional change.
>
> THat says _what_ you are doing, but I have no idea _why_ this is needed
> for anything...
>
Let's add something like
"Let's make it consistent, especially preparing for more users of the
'mhp' terminology."
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
On 25.01.21 07:22, Anshuman Khandual wrote:
>
> On 12/22/20 12:42 PM, Anshuman Khandual wrote:
>> pfn_valid() asserts that there is a memblock entry for a given pfn without
>> MEMBLOCK_NOMAP flag being set. The problem with ZONE_DEVICE based memory is
>> that they do not have memblock entries.
On 25.01.21 08:41, Muchun Song wrote:
> On Mon, Jan 25, 2021 at 2:40 PM Muchun Song wrote:
>>
>> On Mon, Jan 25, 2021 at 8:05 AM David Rientjes wrote:
>>>
>>>
>>> On Sun, 17 Jan 2021, Muchun Song wrote:
>>>
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index
On 25.01.21 10:52, Anshuman Khandual wrote:
>
>
> On 1/25/21 2:55 PM, David Hildenbrand wrote:
>> On 25.01.21 03:58, Anshuman Khandual wrote:
>>> This series adds a mechanism allowing platforms to weigh in and prevalidate
>>> incoming address range before p
On 25.01.21 11:39, Oscar Salvador wrote:
> On Tue, Jan 19, 2021 at 02:58:41PM +0100, David Hildenbrand wrote:
>> IIRC, there is a conflict with the hpage vmemmap freeing patch set,
>> right? How are we going to handle that?
>
> First of all, sorry for the lateness