On 08/01/20 21:24, Sean Christopherson wrote:
> -	level = host_pfn_mapping_level(vcpu, gfn, pfn);
> +	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
> +	if (!memslot_valid_for_gpte(slot, true))
> +		return PT_PAGE_TABLE_LEVEL;
Following up on my remark to patch 7, this can also
On 09/01/20 20:47, Barret Rhoden wrote:
> Hi -
>
> On 1/8/20 3:24 PM, Sean Christopherson wrote:
>> This series is a mix of bug fixes, cleanup and new support in KVM's
>> handling of huge pages. The series initially stemmed from a syzkaller
>> bug report[1], which is fixed by patch 02, "mm: thp:
On 08/01/20 21:24, Sean Christopherson wrote:
> +
> +	/*
> +	 * Manually do the equivalent of kvm_vcpu_gfn_to_hva() to avoid the
> +	 * "writable" check in __gfn_to_hva_many(), which will always fail on
> +	 * read-only memslots due to gfn_to_hva() assuming writes.  Earlier
> +
On 09/01/20 22:04, Thomas Gleixner wrote:
> Sean Christopherson writes:
>
>> diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
>> index b5e49e6bac63..400ac8da75e8 100644
>> --- a/arch/x86/include/asm/pgtable_types.h
>> +++
On 08/01/20 21:24, Sean Christopherson wrote:
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 5f7f06824c2b..d9aced677ddd 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1418,15 +1418,23 @@ EXPORT_SYMBOL_GPL(kvm_is_visible_gfn);
>
> unsigned long
On 16/12/19 18:59, Barret Rhoden wrote:
> Does KVM-x86 need its own names for the levels? If not, I could convert
> the PT_PAGE_TABLE_* stuff to PG_LEVEL_* stuff.
Yes, please do. For the 2M/4M case, it's only incorrect to use 2M here:
if (PTTYPE == 32 && walker->level ==
On 11/12/19 22:32, Barret Rhoden wrote:
> This patchset allows KVM to map huge pages for DAX-backed files.
>
> I held previous versions in limbo while people were sorting out whether
> or not DAX pages were going to remain PageReserved and how that relates
> to KVM.
>
> Now that that is sorted
On 11/12/19 22:32, Barret Rhoden wrote:
> +	/*
> +	 * Our caller grabbed the KVM mmu_lock with a successful
> +	 * mmu_notifier_retry, so we're safe to walk the page table.
> +	 */
> +	switch (dev_pagemap_mapping_shift(hva, current->mm)) {
> +	case PMD_SHIFT:
> +	case
On 14/11/18 22:51, Barret Rhoden wrote:
> KVM has a use case for determining the size of a dax mapping. The KVM
> code has easy access to the address and the mm; hence the change in
> parameters.
>
> Signed-off-by: Barret Rhoden
> Reviewed-by: David Hildenbrand
> ---
> include/linux/mm.h |
On 13/11/2018 17:21, Barret Rhoden wrote:
> On 2018-11-12 at 20:31 Paolo Bonzini wrote:
>> Looks good. What's the plan for removing PageReserved from DAX pages?
>
> I hear that's going on in this thread:
>
> https://lore.kernel.org/lkml/154145268025.30046.1174265234596
On 13/11/2018 11:02, Pankaj Gupta wrote:
>
>>
>> On 09.11.18 21:39, Barret Rhoden wrote:
>>> This change allows KVM to map DAX-backed files made of huge pages with
>>> huge mappings in the EPT/TDP.
>>>
>>> DAX pages are not PageTransCompound. The existing check is trying to
>>> determine if the
On 09/11/2018 21:39, Barret Rhoden wrote:
> This change allows KVM to map DAX-backed files made of huge pages with
> huge mappings in the EPT/TDP.
>
> DAX pages are not PageTransCompound. The existing check is trying to
> determine if the mapping for the pfn is a huge mapping or not. For
>
On 06/11/2018 22:05, Barret Rhoden wrote:
> On 2018-10-29 at 17:07 Barret Rhoden wrote:
>> Another issue is that kvm_mmu_zap_collapsible_spte() also uses
>> PageTransCompoundMap() to detect huge pages, but we don't have a way to
>> get the HVA easily. Can we just aggressively zap DAX pages
On 02/11/2018 21:32, Barret Rhoden wrote:
> One of the other things I noticed was some places in KVM make a
> distinction between kvm_is_reserved_pfn and PageReserved:
>
> void kvm_set_pfn_dirty(kvm_pfn_t pfn)
> {
> 	if (!kvm_is_reserved_pfn(pfn)) {
> 		struct page *page =
On 31/10/2018 22:16, Dan Williams wrote:
>> No, please don't. The kvm_is_reserved_pfn() check is for correctness,
>> the page-size check is for optimization. In theory you could have a
>> ZONE_DEVICE area that is smaller than 2MB and thus does not use huge pages.
> To be clear, I was not
On 29/10/2018 23:25, Dan Williams wrote:
> I'm wondering if we're adding an explicit is_zone_device_page() check
> in this path to determine the page mapping size if that can be a
> replacement for the kvm_is_reserved_pfn() check. In other words, the
> goal of fixing up PageReserved() was to
On 30/10/2018 20:45, Barret Rhoden wrote:
> On 2018-10-29 at 20:10 Dan Williams wrote:
>> The property of DAX pages that requires special coordination is the
>> fact that the device hosting the pages can be disabled at will. The
>> get_dev_pagemap() api is the interface to pin a device-pfn so
On 20/07/2018 16:11, Zhang,Yi wrote:
> Added Jiang,Dave,
>
> Ping for further review, comments.
I need an Acked-by from the MM people to merge this. Jan, Dan?
Paolo
>
> Thanks All
>
> Regards
> Yi.
>
>
> On 11/07/2018 01:01, Zhang Yi wrote:
>> For device specific memory space, when we
On 04/07/2018 16:50, Dan Williams wrote:
>> +	return is_zone_device_page(page) &&
>> +		((page->pgmap->type == MEMORY_DEVICE_FS_DAX) ||
>> +		 (page->pgmap->type == MEMORY_DEVICE_DEV_DAX));
> Jerome, might there be any use case to pass MEMORY_DEVICE_PUBLIC
> memory
On 04/07/2018 17:30, Zhang Yi wrote:
> For device specific memory space, when we move these area of pfn to
> memory zone, we will set the page reserved flag at that time, some of
> these reserved for device mmio, and some of these are not, such as
> NVDIMM pmem.
>
> Now, we map these dev_dax or
On 24/11/2017 14:02, Pankaj Gupta wrote:
>
>>> - Suggestion by Paolo & Stefan (previously) to use virtio-blk makes
>>>   sense if we just want a flush vehicle to send guest commands to the
>>>   host and get a reply after asynchronous execution. There was
>>>   previous discussion [1]
On 23/11/2017 17:14, Dan Williams wrote:
> On Wed, Nov 22, 2017 at 8:05 PM, Xiao Guangrong wrote:
>>
>>
>> On 11/22/2017 02:19 AM, Rik van Riel wrote:
>>
>>> We can go with the "best" interface for what
>>> could be a relatively slow flush (fsync on a
>>> file on