Re: [PATCH] kfifo: add memory barrier in kfifo to prevent data loss

2019-01-02 Thread xiaoguangrong (Xiao Guangrong)
On 12/12/18 8:50 AM, Kees Cook wrote: > On Mon, Dec 10, 2018 at 7:41 PM wrote: >> >> From: Yulei Zhang >> >> Early this year we spotted two possible issues in the kernel's >> kfifo. >> >> One was reported by Xiao Guangrong to the linux kernel. >> htt
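The barrier pairing at issue in this thread can be sketched in plain C11 userspace code. This is illustrative only: the kernel's kfifo uses smp_wmb()/smp_rmb() rather than C11 atomics, and all names below are hypothetical.

```c
#include <assert.h>
#include <stdatomic.h>

/* Minimal single-producer/single-consumer fifo sketch.
 * The release store on `in` pairs with the acquire load in fifo_get():
 * without that pairing, a consumer could observe the new index before
 * the data it guards -- the data-loss hazard discussed in the thread. */
#define FIFO_SIZE 8 /* power of two */

struct fifo {
    unsigned char buf[FIFO_SIZE];
    atomic_uint in;   /* written only by the producer */
    atomic_uint out;  /* written only by the consumer */
};

static int fifo_put(struct fifo *f, unsigned char c)
{
    unsigned int in = atomic_load_explicit(&f->in, memory_order_relaxed);
    /* acquire: pairs with the consumer's release of `out`, so a freed
     * slot is really free before we overwrite it */
    unsigned int out = atomic_load_explicit(&f->out, memory_order_acquire);

    if (in - out == FIFO_SIZE)
        return 0;                        /* full */
    f->buf[in & (FIFO_SIZE - 1)] = c;    /* write the data first ... */
    /* ... then publish the index with release semantics */
    atomic_store_explicit(&f->in, in + 1, memory_order_release);
    return 1;
}

static int fifo_get(struct fifo *f, unsigned char *c)
{
    unsigned int out = atomic_load_explicit(&f->out, memory_order_relaxed);
    /* acquire: if we observe the new index, we also observe the data */
    unsigned int in = atomic_load_explicit(&f->in, memory_order_acquire);

    if (in == out)
        return 0;                        /* empty */
    *c = f->buf[out & (FIFO_SIZE - 1)];
    atomic_store_explicit(&f->out, out + 1, memory_order_release);
    return 1;
}
```

With relaxed loads everywhere this compiles and passes single-threaded tests just as well, which is why this class of bug is so easy to miss.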

Re: [PATCH] KVM: try __get_user_pages_fast even if not in atomic context

2018-08-06 Thread Xiao Guangrong
On 07/27/2018 11:46 PM, Paolo Bonzini wrote: We are currently cutting hva_to_pfn_fast short if we do not want an immediate exit, which is represented by !async && !atomic. However, this is unnecessary, and __get_user_pages_fast is *much* faster because the regular get_user_pages takes
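The control flow under discussion — try the lockless fast path even outside atomic context, and fall back to the slow path (which takes mmap_sem) only when sleeping is allowed — can be sketched with hypothetical stand-ins for the real KVM helpers:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for __get_user_pages_fast (lockless) and
 * get_user_pages (may sleep); not the real kernel signatures. */
typedef bool (*gup_fn)(unsigned long hva, unsigned long *pfn);

static long hva_to_pfn_sketch(unsigned long hva, bool atomic,
                              gup_fn fast, gup_fn slow)
{
    unsigned long pfn;

    /* The change under review: always try the fast path first,
     * even when !atomic, because it is much cheaper. */
    if (fast(hva, &pfn))
        return (long)pfn;
    if (atomic)
        return -1;      /* may not sleep: the slow path is off limits */
    if (slow(hva, &pfn))
        return (long)pfn;
    return -1;
}
```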

Re: [PATCH] KVM/MMU: Combine flushing remote tlb in mmu_set_spte()

2018-07-25 Thread Xiao Guangrong
set_spte(). Signed-off-by: Lan Tianyu Looks good, but I'd like a second opinion. Guangrong, Junaid, can you review this? It looks good to me. Reviewed-by: Xiao Guangrong BTW, the @intel box is not accessible to me now. ;)

Is read barrier missed in kfifo?

2018-05-11 Thread Xiao Guangrong
Hi, Currently there is no read barrier between reading the index (kfifo.in) and fetching the real data from the fifo. I am afraid this can cause the fifo to be observed as non-empty while the data is not yet ready to be read. Right? Thanks!

Re: [PATCH] KVM: X86: Fix SMRAM accessing even if VM is shutdown

2018-02-10 Thread Xiao Guangrong
On 02/09/2018 08:42 PM, Paolo Bonzini wrote: On 09/02/2018 04:22, Xiao Guangrong wrote: That is a good question... :) This case (with KVM_MEMSLOT_INVALID is set) can be easily constructed, userspace should avoid this case by itself (avoiding vCPU accessing the memslot which is being

Re: [PATCH] KVM: X86: Fix SMRAM accessing even if VM is shutdown

2018-02-08 Thread Xiao Guangrong
On 02/08/2018 06:31 PM, Paolo Bonzini wrote: On 08/02/2018 09:57, Xiao Guangrong wrote: Maybe it should return RET_PF_EMULATE, which would cause an emulation failure and then an exit with KVM_EXIT_INTERNAL_ERROR. So the root cause is that a running vCPU accessing the memory whose memslot

Re: [PATCH] KVM: X86: Fix SMRAM accessing even if VM is shutdown

2018-02-08 Thread Xiao Guangrong
On 02/07/2018 10:16 PM, Paolo Bonzini wrote: On 07/02/2018 07:25, Wanpeng Li wrote: diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 786cd00..445e702 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -7458,6 +7458,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu,

Re: [PATCH v5 0/2] KVM: MMU: fix kvm_is_mmio_pfn()

2017-11-08 Thread Xiao Guangrong
type, the performance of guest accesses to those pages would be harmed. Therefore, we check the host memory type in addition and only treat UC/UC- pages as MMIO. Reviewed-by: Xiao Guangrong <xiaoguangr...@tencent.com>

Re: [PATCH v4 3/3] KVM: MMU: consider host cache mode in MMIO page check

2017-11-07 Thread Xiao Guangrong
On 11/03/2017 05:29 PM, Haozhong Zhang wrote: On 11/03/17 17:24 +0800, Xiao Guangrong wrote: On 11/03/2017 05:02 PM, Haozhong Zhang wrote: On 11/03/17 16:51 +0800, Haozhong Zhang wrote: On 11/03/17 14:54 +0800, Xiao Guangrong wrote: On 11/03/2017 01:53 PM, Haozhong Zhang wrote: Some

Re: [PATCH v4 3/3] KVM: MMU: consider host cache mode in MMIO page check

2017-11-03 Thread Xiao Guangrong
On 11/03/2017 05:02 PM, Haozhong Zhang wrote: On 11/03/17 16:51 +0800, Haozhong Zhang wrote: On 11/03/17 14:54 +0800, Xiao Guangrong wrote: On 11/03/2017 01:53 PM, Haozhong Zhang wrote: Some reserved pages, such as those from NVDIMM DAX devices, are not for MMIO, and can be mapped

Re: [PATCH v4 3/3] KVM: MMU: consider host cache mode in MMIO page check

2017-11-03 Thread Xiao Guangrong
On 11/03/2017 04:51 PM, Haozhong Zhang wrote: On 11/03/17 14:54 +0800, Xiao Guangrong wrote: On 11/03/2017 01:53 PM, Haozhong Zhang wrote: Some reserved pages, such as those from NVDIMM DAX devices, are not for MMIO, and can be mapped with cached memory type for better performance. However

Re: [PATCH v4 3/3] KVM: MMU: consider host cache mode in MMIO page check

2017-11-03 Thread Xiao Guangrong
On 11/03/2017 01:53 PM, Haozhong Zhang wrote: Some reserved pages, such as those from NVDIMM DAX devices, are not for MMIO, and can be mapped with cached memory type for better performance. However, the above check misconceives those pages as MMIO. Because KVM maps MMIO pages with UC memory
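The fix discussed in this thread boils down to a two-input decision. A pure-logic sketch (hypothetical names, not the kernel code): a reserved page is treated as MMIO only when the host also maps it uncacheable, so reserved-but-cached pages such as NVDIMM DAX pages are handled as ordinary memory.

```c
#include <assert.h>
#include <stdbool.h>

enum memtype { MT_WB, MT_WC, MT_WT, MT_UC_MINUS, MT_UC };

/* Decision sketch of the check under review (illustrative only):
 * non-reserved pages are never MMIO; reserved pages count as MMIO
 * only if the host maps them UC or UC-. */
static bool treat_pfn_as_mmio(bool page_reserved, enum memtype host_type)
{
    if (!page_reserved)
        return false;
    return host_type == MT_UC || host_type == MT_UC_MINUS;
}
```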

Re: [PATCH v2 2/2] KVM: MMU: consider host cache mode in MMIO page check

2017-11-02 Thread Xiao Guangrong
On 10/31/2017 07:48 PM, Haozhong Zhang wrote: Some reserved pages, such as those from NVDIMM DAX devices, are not for MMIO, and can be mapped with cached memory type for better performance. However, the above check misconceives those pages as MMIO. Because KVM maps MMIO pages with UC memory

Re: [PATCH 0/3] KVM: MMU: fix kvm_is_mmio_pfn()

2017-10-31 Thread Xiao Guangrong
On 10/27/2017 10:25 AM, Haozhong Zhang wrote: [I just copy the commit message from patch 3] By default, KVM treats a reserved page as for MMIO purpose, and maps it to guest with UC memory type. However, some reserved pages are not for MMIO, such as pages of DAX device (e.g., /dev/daxX.Y).

Re: [PATCH v2 0/7] KVM: MMU: fast write protect

2017-07-03 Thread Xiao Guangrong
On 07/03/2017 11:47 PM, Paolo Bonzini wrote: On 03/07/2017 16:39, Xiao Guangrong wrote: On 06/20/2017 05:15 PM, guangrong.x...@gmail.com wrote: From: Xiao Guangrong <xiaoguangr...@tencent.com> Changelog in v2: thanks to Paolo's review, this version disables write-protect-all

Re: [PATCH v2 0/7] KVM: MMU: fast write protect

2017-07-03 Thread Xiao Guangrong
On 06/20/2017 05:15 PM, guangrong.x...@gmail.com wrote: From: Xiao Guangrong <xiaoguangr...@tencent.com> Changelog in v2: thanks to Paolo's review, this version disables write-protect-all if PML is supported Hi Paolo, Do you have time to have a look at this new version? ;) Or I shoul

Re: [PATCH 0/7] KVM: MMU: fast write protect

2017-06-08 Thread Xiao Guangrong
On 05/30/2017 12:48 AM, Paolo Bonzini wrote: On 23/05/2017 04:23, Xiao Guangrong wrote: Ping... Sorry to disturb, just make this patchset not be missed. :) It won't. :) I'm going to look at it and the dirty page ring buffer this week. Ping.. :)

Re: [Qemu-devel] [PATCH 0/7] KVM: MMU: fast write protect

2017-06-05 Thread Xiao Guangrong
On 06/05/2017 03:36 PM, Jay Zhou wrote: /* enable ucontrol for s390 */ struct kvm_s390_ucas_mapping { diff --git a/memory.c b/memory.c index 4c95aaf..b836675 100644 --- a/memory.c +++ b/memory.c @@ -809,6 +809,13 @@ static void address_space_update_ioeventfds(AddressSpace *as)

Re: [PATCH 0/7] KVM: MMU: fast write protect

2017-05-22 Thread Xiao Guangrong
Ping... Sorry to disturb, just make this patchset not be missed. :) On 05/04/2017 03:06 PM, Paolo Bonzini wrote: On 04/05/2017 05:36, Xiao Guangrong wrote: Great. As there is no conflict between these two patchsets except dirty ring pages takes benefit from write-protect-all, i think

Re: [PATCH 2/2] KVM: nVMX: fix nEPT handling of guest page table accesses

2017-05-12 Thread Xiao Guangrong
CC Kevin, as I am not sure if Intel is aware of this issue; it breaks other hypervisors, e.g., Xen, as well. On 05/11/2017 07:23 PM, Paolo Bonzini wrote: The new ept_access_test_paddr_read_only_ad_disabled testcase caused an infinite stream of EPT violations because KVM did not find anything

Re: [PATCH 1/2] KVM: nVMX: fix EPT permissions as reported in exit qualification

2017-05-11 Thread Xiao Guangrong
On 05/12/2017 11:59 AM, Xiao Guangrong wrote: error: @@ -452,7 +459,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker, */ if (!(errcode & PFERR_RSVD_MASK)) { vcpu->arch.exit_qualification &= 0x187; -vcpu->arch.exit_qualification

Re: [PATCH 1/2] KVM: nVMX: fix EPT permissions as reported in exit qualification

2017-05-11 Thread Xiao Guangrong
ification |= ((pt_access & pte) & 0x7) << 3; ^ here, the original code is buggy as pt_access and pte have different bit order, fortunately, this patch fixes it too. :) Otherwise it looks good to me, thanks for your fix. Reviewed-by: Xiao Guangrong <xiaoguangr...@tencent.com>
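The bit-order mismatch called out above can be illustrated with a small conversion helper. The constants follow KVM's ACC_* layout (exec = bit 0, write = bit 1, user/read = bit 2), while the EPT permission order used in the exit qualification is read = bit 0, write = bit 1, exec = bit 2; this helper is an illustration of the needed remapping, not the patch itself.

```c
#include <assert.h>
#include <stdint.h>

/* KVM access-flag layout (ACC_* order) */
#define ACC_EXEC  0x1
#define ACC_WRITE 0x2
#define ACC_USER  0x4

/* Remap ACC_* order into EPT permission order (read/write/exec).
 * Shifting the ACC-ordered value straight into the qualification
 * (bits 3-5) swaps the read and exec bits -- the bug noted above. */
static uint32_t acc_to_ept_perm(uint32_t acc)
{
    uint32_t ept = 0;

    if (acc & ACC_USER)   /* readable */
        ept |= 0x1;
    if (acc & ACC_WRITE)
        ept |= 0x2;
    if (acc & ACC_EXEC)
        ept |= 0x4;
    return ept;
}
```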

Re: [PATCH 0/7] KVM: MMU: fast write protect

2017-05-03 Thread Xiao Guangrong
On 05/03/2017 10:57 PM, Paolo Bonzini wrote: On 03/05/2017 16:50, Xiao Guangrong wrote: Furthermore, userspace has no knowledge about whether PML is enabled (it can be queried from sysfs, but that is not a good way for QEMU), so it is difficult for userspace to know when to use write-protect-all

Re: [PATCH 0/7] KVM: MMU: fast write protect

2017-05-03 Thread Xiao Guangrong
On 05/03/2017 08:28 PM, Paolo Bonzini wrote: So if I understand correctly this relies on userspace doing: 1) KVM_GET_DIRTY_LOG without write protect 2) KVM_WRITE_PROTECT_ALL_MEM Writes may happen between 1 and 2; they are not represented in the live dirty bitmap but

Re: [PATCH] x86, kvm: Handle PFNs outside of kernel reach when touching GPTEs

2017-04-17 Thread Xiao Guangrong
On 04/12/2017 09:16 PM, Sironi, Filippo wrote: Thanks for taking the time and sorry for the delay. On 6. Apr 2017, at 16:22, Radim Krčmář wrote: 2017-04-05 15:07+0200, Filippo Sironi: cmpxchg_gpte() calls get_user_pages_fast() to retrieve the number of pages and the

Re: [PATCH 11/16] fpga: intel: fme: add partial reconfiguration sub feature support

2017-03-30 Thread Xiao Guangrong
On 31/03/2017 4:30 AM, Alan Tull wrote: On Thu, Mar 30, 2017 at 7:08 AM, Wu Hao wrote: From: Kang Luwei Partial Reconfiguration (PR) is the most important function for FME. It allows reconfiguration for given Port/Accelerated Function Unit (AFU).

Re: [PATCH v2] KVM: x86: cleanup the page tracking SRCU instance

2017-03-28 Thread Xiao Guangrong
ion is called in kvm_arch_init_vm(). Otherwise it looks great to me: Reviewed-by: Xiao Guangrong <xiaoguangrong.e...@gmail.com> Thanks for the fix.

Re: [PATCH v2] mm, proc: Fix region lost in /proc/self/smaps

2016-09-19 Thread Xiao Guangrong
On 09/14/2016 11:38 PM, Oleg Nesterov wrote: On 09/13, Dave Hansen wrote: On 09/13/2016 07:59 AM, Oleg Nesterov wrote: I agree. I don't even understand why this was considered as a bug. Obviously, m_stop() which drops mmap_sep should not be called, or all the threads should be stopped, if

Re: [PATCH v2] mm, proc: Fix region lost in /proc/self/smaps

2016-09-12 Thread Xiao Guangrong
On 09/13/2016 03:10 AM, Michal Hocko wrote: On Mon 12-09-16 08:01:06, Dave Hansen wrote: On 09/12/2016 05:54 AM, Michal Hocko wrote: In order to fix this bug, we make 'file->version' indicate the end address of the current VMA Doesn't this open the door to other weird cases? Say B would be

Re: DAX mapping detection (was: Re: [PATCH] Fix region lost in /proc/self/smaps)

2016-09-12 Thread Xiao Guangrong
On 09/12/2016 11:44 AM, Rudoff, Andy wrote: Whether msync/fsync can make data persistent depends on ADR feature on memory controller, if it exists everything works well, otherwise, we need to have another interface that is why 'Flush hint table' in ACPI comes in. 'Flush hint table' is

Re: DAX mapping detection (was: Re: [PATCH] Fix region lost in /proc/self/smaps)

2016-09-12 Thread Xiao Guangrong
On 09/09/2016 11:40 PM, Dan Williams wrote: On Fri, Sep 9, 2016 at 1:55 AM, Xiao Guangrong <guangrong.x...@linux.intel.com> wrote: [..] Whether a persistent memory mapping requires an msync/fsync is a filesystem specific question. This mincore proposal is separate from that. Co

[PATCH v2] mm, proc: Fix region lost in /proc/self/smaps

2016-09-11 Thread Xiao Guangrong
address range may be outputted twice, e.g: Take two example VMAs: vma-A: (0x1000 -> 0x2000) vma-B: (0x2000 -> 0x3000) read() #1: prints vma-A, sets m->version=0x2000 Now, merge A/B to make C: vma-C: (0x1000 -> 0x3000) read() #2: find_vma(m->version=0x2000),
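The duplicated-range scenario in this changelog is easy to reproduce with a toy model of find_vma() (first VMA whose end lies above the given address); the names and model are illustrative, not the kernel implementation.

```c
#include <assert.h>
#include <stddef.h>

struct vma { unsigned long start, end; };

/* Toy find_vma(): return the first VMA with end > addr,
 * mirroring the lookup the seq_file code performs on m->version. */
static const struct vma *find_vma(const struct vma *v, size_t n,
                                  unsigned long addr)
{
    for (size_t i = 0; i < n; i++)
        if (v[i].end > addr)
            return &v[i];
    return NULL;
}
```

After read() #1 sets version=0x2000 and A/B merge into C (0x1000 -> 0x3000), the lookup lands on a VMA that starts *before* the saved version, so the range 0x1000 -> 0x2000 is reported a second time.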

Re: DAX mapping detection (was: Re: [PATCH] Fix region lost in /proc/self/smaps)

2016-09-09 Thread Xiao Guangrong
On 09/09/2016 07:04 AM, Dan Williams wrote: On Thu, Sep 8, 2016 at 3:56 PM, Ross Zwisler <ross.zwis...@linux.intel.com> wrote: On Wed, Sep 07, 2016 at 09:32:36PM -0700, Dan Williams wrote: [ adding linux-fsdevel and linux-nvdimm ] On Wed, Sep 7, 2016 at 8:36 PM, Xiao Guangrong <gu

Re: [PATCH] Fix region lost in /proc/self/smaps

2016-09-09 Thread Xiao Guangrong
On 09/08/2016 10:05 PM, Dave Hansen wrote: On 09/07/2016 08:36 PM, Xiao Guangrong wrote:>> The user will see two VMAs in their output: A: 0x1000->0x2000 C: 0x1000->0x3000 Will it confuse them to see the same virtual address range twice? Or is there somethin

Re: [PATCH] Fix region lost in /proc/self/smaps

2016-09-07 Thread Xiao Guangrong
On 09/08/2016 12:34 AM, Dave Hansen wrote: On 09/06/2016 11:51 PM, Xiao Guangrong wrote: In order to fix this bug, we make 'file->version' indicate the next VMA we want to handle This new approach makes it more likely that we'll skip a new VMA that gets inserted in between the read

Re: [PATCH] Fix region lost in /proc/self/smaps

2016-09-07 Thread Xiao Guangrong
Sorry, the title should be [PATCH] mm, proc: Fix region lost in /proc/self/smaps On 09/07/2016 02:51 PM, Xiao Guangrong wrote: Recently, Redhat reported that the nvml test suite failed on QEMU/KVM; for more detailed info please refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1365721 Actually

[PATCH] Fix region lost in /proc/self/smaps

2016-09-07 Thread Xiao Guangrong
e lost if the last VMA is gone, eg: The process VMA list is A->B->C->D CPU 0 CPU 1 read() system call handle VMA B version = B return to userspace unmap VMA B issue read() again to continue to get the region in

Re: DAX can not work on virtual nvdimm device

2016-08-31 Thread Xiao Guangrong
On 08/31/2016 01:09 AM, Dan Williams wrote: Can you post your exact reproduction steps? This test is not failing for me. Sure. 1. make the guest kernel based on your tree, the top commit is 10d7902fa0e82b (dax: unmap/truncate on device shutdown) and the config file can be found in

Re: DAX can not work on virtual nvdimm device

2016-08-30 Thread Xiao Guangrong
On 08/30/2016 03:30 AM, Ross Zwisler wrote: Can you please verify that you are using "usable" memory for your memmap? All the details are here: https://nvdimm.wiki.kernel.org/how_to_choose_the_correct_memmap_kernel_parameter_for_pmem_on_your_system Sure. This is the BIOS E820 info in

Re: DAX can not work on virtual nvdimm device

2016-08-29 Thread Xiao Guangrong
Hi Ross, Sorry for the delay, I just returned from KVM Forum. On 08/20/2016 02:30 AM, Ross Zwisler wrote: On Fri, Aug 19, 2016 at 07:59:29AM -0700, Dan Williams wrote: On Fri, Aug 19, 2016 at 4:19 AM, Xiao Guangrong <guangrong.x...@linux.intel.com> wrote: Hi Dan, Recently,

Re: [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed

2016-07-06 Thread Xiao Guangrong
On 07/06/2016 07:48 PM, Paolo Bonzini wrote: On 06/07/2016 06:02, Xiao Guangrong wrote: May I ask you what the exact issue you have with this interface for Intel to support your own GPU virtualization? Intel's vGPU can work with this framework. We really appreciate your / nvidia's

Re: [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed

2016-07-05 Thread Xiao Guangrong
On 07/06/2016 10:57 AM, Neo Jia wrote: On Wed, Jul 06, 2016 at 10:35:18AM +0800, Xiao Guangrong wrote: On 07/06/2016 10:18 AM, Neo Jia wrote: On Wed, Jul 06, 2016 at 10:00:46AM +0800, Xiao Guangrong wrote: On 07/05/2016 08:18 PM, Paolo Bonzini wrote: On 05/07/2016 07:41, Neo Jia

Re: [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed

2016-07-05 Thread Xiao Guangrong
On 07/06/2016 10:18 AM, Neo Jia wrote: On Wed, Jul 06, 2016 at 10:00:46AM +0800, Xiao Guangrong wrote: On 07/05/2016 08:18 PM, Paolo Bonzini wrote: On 05/07/2016 07:41, Neo Jia wrote: On Thu, Jun 30, 2016 at 03:01:49PM +0200, Paolo Bonzini wrote: The vGPU folks would like to trap

Re: [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed

2016-07-05 Thread Xiao Guangrong
On 07/05/2016 11:07 PM, Neo Jia wrote: On Tue, Jul 05, 2016 at 05:02:46PM +0800, Xiao Guangrong wrote: It is physically contiguous but it is done during the runtime, physically contiguous doesn't mean static partition at boot time. And only during runtime, the proper HW resource

Re: [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed

2016-07-05 Thread Xiao Guangrong
On 07/05/2016 08:18 PM, Paolo Bonzini wrote: On 05/07/2016 07:41, Neo Jia wrote: On Thu, Jun 30, 2016 at 03:01:49PM +0200, Paolo Bonzini wrote: The vGPU folks would like to trap the first access to a BAR by setting vm_ops on the VMAs produced by mmap-ing a VFIO device. The fault handler

Re: [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed

2016-07-05 Thread Xiao Guangrong
On 07/05/2016 03:30 PM, Neo Jia wrote: (Just for completeness, if you really want to use a device in above example as VFIO passthru, the second step is not completely handled in userspace, it is actually the guest driver who will allocate and setup the proper hw resource which will later

Re: [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed

2016-07-05 Thread Xiao Guangrong
On 07/05/2016 01:16 PM, Neo Jia wrote: On Tue, Jul 05, 2016 at 12:02:42PM +0800, Xiao Guangrong wrote: On 07/05/2016 09:35 AM, Neo Jia wrote: On Tue, Jul 05, 2016 at 09:19:40AM +0800, Xiao Guangrong wrote: On 07/04/2016 11:33 PM, Neo Jia wrote: Sorry, I think I misread

Re: [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed

2016-07-04 Thread Xiao Guangrong
On 07/05/2016 09:35 AM, Neo Jia wrote: On Tue, Jul 05, 2016 at 09:19:40AM +0800, Xiao Guangrong wrote: On 07/04/2016 11:33 PM, Neo Jia wrote: Sorry, I think I misread the "allocation" as "mapping". We only delay the cpu mapping, not the allocation. So how to under

Re: [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed

2016-07-04 Thread Xiao Guangrong
On 07/04/2016 11:33 PM, Neo Jia wrote: Sorry, I think I misread the "allocation" as "mapping". We only delay the cpu mapping, not the allocation. So how to understand your statement: "at that moment nobody has any knowledge about how the physical mmio gets virtualized" The resource,

Re: [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed

2016-07-04 Thread Xiao Guangrong
On 07/04/2016 05:16 PM, Neo Jia wrote: On Mon, Jul 04, 2016 at 04:45:05PM +0800, Xiao Guangrong wrote: On 07/04/2016 04:41 PM, Neo Jia wrote: On Mon, Jul 04, 2016 at 04:19:20PM +0800, Xiao Guangrong wrote: On 07/04/2016 03:53 PM, Neo Jia wrote: On Mon, Jul 04, 2016 at 03:37:35PM +0800

Re: [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed

2016-07-04 Thread Xiao Guangrong
On 07/04/2016 04:45 PM, Xiao Guangrong wrote: On 07/04/2016 04:41 PM, Neo Jia wrote: On Mon, Jul 04, 2016 at 04:19:20PM +0800, Xiao Guangrong wrote: On 07/04/2016 03:53 PM, Neo Jia wrote: On Mon, Jul 04, 2016 at 03:37:35PM +0800, Xiao Guangrong wrote: On 07/04/2016 03:03 PM, Neo Jia
