On 12/12/18 8:50 AM, Kees Cook wrote:
> On Mon, Dec 10, 2018 at 7:41 PM wrote:
>>
>> From: Yulei Zhang
>>
Early this year we spotted what may be two issues in the kernel
kfifo.
One was reported by Xiao Guangrong to the linux kernel.
htt
On 07/27/2018 11:46 PM, Paolo Bonzini wrote:
We are currently cutting hva_to_pfn_fast short if we do not want an
immediate exit, which is represented by !async && !atomic. However,
this is unnecessary, and __get_user_pages_fast is *much* faster
because the regular get_user_pages takes
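For illustration, a minimal sketch of the fast-then-slow pattern described
above, assuming the ~4.18-era GUP API; pin_one_page() is a hypothetical
helper, not KVM's actual code:

#include <linux/mm.h>

/* Try the lockless fast path first; fall back to the slow path, which
 * takes mmap_sem and may sleep, only when the fast walk fails. */
static int pin_one_page(unsigned long addr, bool write, struct page **page)
{
	/* __get_user_pages_fast() walks the page tables without mmap_sem. */
	if (__get_user_pages_fast(addr, 1, write, page) == 1)
		return 1;

	/* Slow path: takes mmap_sem and can fault the page in. */
	return get_user_pages_unlocked(addr, 1, page,
				       write ? FOLL_WRITE : 0);
}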
set_spte().
Signed-off-by: Lan Tianyu
Looks good, but I'd like a second opinion. Guangrong, Junaid, can you
review this?
It looks good to me.
Reviewed-by: Xiao Guangrong
BTW, the @intel box is not accessible to me now. ;)
Hi,
Currently, there is no read barrier between reading the index
(kfifo.in) and fetching the real data from the fifo.
I am afraid that can cause the kfifo to be observed as not empty
while the data is not actually ready to be read. Right?
Thanks!
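For illustration, a minimal sketch of the ordering concern on the reader
side; struct my_fifo and its fields are hypothetical stand-ins, not the
actual kfifo code:

/* Lockless reader: the data loads must not be reordered before the
 * load of the producer's index, or the fifo can be seen as not empty
 * while the element bytes are still stale. */
static bool fifo_peek(struct my_fifo *fifo, void *buf, size_t esize)
{
	unsigned int in = READ_ONCE(fifo->in);	/* index published by producer */

	if (in == fifo->out)
		return false;			/* empty */

	smp_rmb();	/* pairs with the producer's smp_wmb(): orders the
			 * index load before the data loads below */
	memcpy(buf, fifo->data + (fifo->out & fifo->mask) * esize, esize);
	return true;
}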
On 02/09/2018 08:42 PM, Paolo Bonzini wrote:
On 09/02/2018 04:22, Xiao Guangrong wrote:
That is a good question... :)
This case (with KVM_MEMSLOT_INVALID set) can be easily constructed;
userspace should avoid this case by itself (avoiding vCPUs accessing the
memslot which is being
On 02/08/2018 06:31 PM, Paolo Bonzini wrote:
On 08/02/2018 09:57, Xiao Guangrong wrote:
Maybe it should return RET_PF_EMULATE, which would cause an emulation
failure and then an exit with KVM_EXIT_INTERNAL_ERROR.
So the root cause is that a running vCPU accessing the memory whose memslot
On 02/07/2018 10:16 PM, Paolo Bonzini wrote:
On 07/02/2018 07:25, Wanpeng Li wrote:
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 786cd00..445e702 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7458,6 +7458,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu,
type, the
performance of guest accesses to those pages would be harmed.
Therefore, we check the host memory type in addition and only treat
UC/UC- pages as MMIO.
Reviewed-by: Xiao Guangrong <xiaoguangr...@tencent.com>
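For illustration, a hedged sketch of the resulting check along the lines
this thread describes; pat_pfn_immune_to_uc_mtrr() is the host memory-type
helper the series relies on, and the exact upstream code may differ:

static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
{
	if (pfn_valid(pfn))
		/* A reserved page is treated as MMIO only when the host
		 * also maps it uncacheable (UC/UC-), so cached NVDIMM/DAX
		 * pages are no longer misconceived as MMIO. */
		return !is_zero_pfn(pfn) && PageReserved(pfn_to_page(pfn)) &&
		       (!pat_enabled() || pat_pfn_immune_to_uc_mtrr(pfn));

	return true;	/* pfns without a struct page are assumed MMIO */
}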
On 11/03/2017 05:29 PM, Haozhong Zhang wrote:
On 11/03/17 17:24 +0800, Xiao Guangrong wrote:
On 11/03/2017 05:02 PM, Haozhong Zhang wrote:
On 11/03/17 16:51 +0800, Haozhong Zhang wrote:
On 11/03/17 14:54 +0800, Xiao Guangrong wrote:
On 11/03/2017 01:53 PM, Haozhong Zhang wrote:
Some
On 11/03/2017 05:02 PM, Haozhong Zhang wrote:
On 11/03/17 16:51 +0800, Haozhong Zhang wrote:
On 11/03/17 14:54 +0800, Xiao Guangrong wrote:
On 11/03/2017 01:53 PM, Haozhong Zhang wrote:
Some reserved pages, such as those from NVDIMM DAX devices, are
not for MMIO, and can be mapped
On 11/03/2017 04:51 PM, Haozhong Zhang wrote:
On 11/03/17 14:54 +0800, Xiao Guangrong wrote:
On 11/03/2017 01:53 PM, Haozhong Zhang wrote:
Some reserved pages, such as those from NVDIMM DAX devices, are
not for MMIO, and can be mapped with cached memory type for better
performance. However
On 11/03/2017 01:53 PM, Haozhong Zhang wrote:
Some reserved pages, such as those from NVDIMM DAX devices, are
not for MMIO, and can be mapped with cached memory type for better
performance. However, the above check misconceives those pages as
MMIO. Because KVM maps MMIO pages with UC memory
On 10/31/2017 07:48 PM, Haozhong Zhang wrote:
Some reserved pages, such as those from NVDIMM DAX devices, are
not for MMIO, and can be mapped with cached memory type for better
performance. However, the above check misconceives those pages as
MMIO. Because KVM maps MMIO pages with UC memory
On 10/27/2017 10:25 AM, Haozhong Zhang wrote:
[I just copied the commit message from patch 3]
By default, KVM treats a reserved page as for MMIO purpose, and maps
it to guest with UC memory type. However, some reserved pages are not
for MMIO, such as pages of DAX device (e.g., /dev/daxX.Y).
On 07/03/2017 11:47 PM, Paolo Bonzini wrote:
On 03/07/2017 16:39, Xiao Guangrong wrote:
On 06/20/2017 05:15 PM, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
Changelog in v2:
thanks to Paolo's review, this version disables write-protect-all if
PML is supported
Hi Paolo,
Do
On 06/20/2017 05:15 PM, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
Changelog in v2:
thanks to Paolo's review, this version disables write-protect-all if
PML is supported
Hi Paolo,
Do you have time to have a look at this new version? ;)
Or should I wait until the patchset
On 05/30/2017 12:48 AM, Paolo Bonzini wrote:
On 23/05/2017 04:23, Xiao Guangrong wrote:
Ping...
Sorry to disturb, just making sure this patchset is not missed. :)
It won't. :) I'm going to look at it and the dirty page ring buffer
this week.
Ping.. :)
On 06/05/2017 03:36 PM, Jay Zhou wrote:
/* enable ucontrol for s390 */
struct kvm_s390_ucas_mapping {
diff --git a/memory.c b/memory.c
index 4c95aaf..b836675 100644
--- a/memory.c
+++ b/memory.c
@@ -809,6 +809,13 @@ static void address_space_update_ioeventfds(AddressSpace *as)
Ping...
Sorry to disturb, just making sure this patchset is not missed. :)
On 05/04/2017 03:06 PM, Paolo Bonzini wrote:
On 04/05/2017 05:36, Xiao Guangrong wrote:
Great.
As there is no conflict between these two patchsets, except that the dirty
ring pages take benefit from write-protect-all, I think
CC Kevin, as I am not sure whether Intel is aware of this issue; it
breaks other hypervisors, e.g., Xen, as well.
On 05/11/2017 07:23 PM, Paolo Bonzini wrote:
The new ept_access_test_paddr_read_only_ad_disabled testcase
caused an infinite stream of EPT violations because KVM did not
find anything
On 05/12/2017 11:59 AM, Xiao Guangrong wrote:
error:
@@ -452,7 +459,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 */
 if (!(errcode & PFERR_RSVD_MASK)) {
 vcpu->arch.exit_qualification &= 0x187;
-vcpu->arch.exit_qualification |= ((pt_access & pte) & 0x7) << 3;
Here the original code is buggy, as pt_access and pte have different bit
orders; fortunately, this patch fixes it too. :)
Otherwise it looks good to me, thanks for your fix.
Reviewed-by: Xiao Guangrong <xiaoguangr...@tencent.com>
On 05/03/2017 10:57 PM, Paolo Bonzini wrote:
On 03/05/2017 16:50, Xiao Guangrong wrote:
Furthermore, userspace has no knowledge about whether PML is enabled (it
can be acquired from sysfs, but that is not a good way in QEMU), so it is
difficult for userspace to know when to use write-protect-all
On 05/03/2017 08:28 PM, Paolo Bonzini wrote:
So if I understand correctly this relies on userspace doing:
1) KVM_GET_DIRTY_LOG without write protect
2) KVM_WRITE_PROTECT_ALL_MEM
Writes may happen between 1 and 2; they are not represented in the live
dirty bitmap but
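For illustration, a hedged sketch of that userspace sequence;
KVM_WRITE_PROTECT_ALL_MEM is the ioctl proposed by this series, not an
upstream API, and vm_fd/log are assumed to be set up elsewhere:

#include <err.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static void fetch_log_then_protect(int vm_fd, struct kvm_dirty_log *log)
{
	/* 1) Fetch the dirty log without write-protecting the pages. */
	if (ioctl(vm_fd, KVM_GET_DIRTY_LOG, log) < 0)
		err(1, "KVM_GET_DIRTY_LOG");

	/* 2) Write-protect all guest memory in one go.  Writes that land
	 * between 1) and 2) are missing from the bitmap just read, but
	 * they fault after 2) and show up in the next round. */
	if (ioctl(vm_fd, KVM_WRITE_PROTECT_ALL_MEM, 0) < 0)
		err(1, "KVM_WRITE_PROTECT_ALL_MEM");
}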
On 04/12/2017 09:16 PM, Sironi, Filippo wrote:
Thanks for taking the time and sorry for the delay.
On 6. Apr 2017, at 16:22, Radim Krčmář wrote:
2017-04-05 15:07+0200, Filippo Sironi:
cmpxchg_gpte() calls get_user_pages_fast() to retrieve the number of
pages and the respective struct
On 31/03/2017 4:30 AM, Alan Tull wrote:
On Thu, Mar 30, 2017 at 7:08 AM, Wu Hao wrote:
From: Kang Luwei
Partial Reconfiguration (PR) is the most important function for FME. It
allows reconfiguration for a given Port/Accelerated Function Unit (AFU).
This patch adds support for PR sub
ion is called in kvm_arch_init_vm().
Otherwise it looks great to me:
Reviewed-by: Xiao Guangrong <xiaoguangrong.e...@gmail.com>
Thanks for the fix.
On 09/14/2016 11:38 PM, Oleg Nesterov wrote:
On 09/13, Dave Hansen wrote:
On 09/13/2016 07:59 AM, Oleg Nesterov wrote:
I agree. I don't even understand why this was considered as a bug.
Obviously, m_stop(), which drops mmap_sem, should not be called, or
all the threads should be stopped, if
On 09/13/2016 03:10 AM, Michal Hocko wrote:
On Mon 12-09-16 08:01:06, Dave Hansen wrote:
On 09/12/2016 05:54 AM, Michal Hocko wrote:
In order to fix this bug, we make 'file->version' indicate the end address
of the current VMA
Doesn't this open the door to other weird cases? Say B would be
On 09/12/2016 11:44 AM, Rudoff, Andy wrote:
Whether msync/fsync can make data persistent depends on the ADR feature of the
memory controller; if it exists everything works well, otherwise we need
to have another interface, which is why the 'Flush hint table' in ACPI comes
in. The 'Flush hint table' is
On 09/09/2016 11:40 PM, Dan Williams wrote:
On Fri, Sep 9, 2016 at 1:55 AM, Xiao Guangrong
wrote:
[..]
Whether a persistent memory mapping requires an msync/fsync is a
filesystem specific question. This mincore proposal is separate from
that. Consider device-DAX for volatile memory
address range may be output twice, e.g.:
Take two example VMAs:
vma-A: (0x1000 -> 0x2000)
vma-B: (0x2000 -> 0x3000)
read() #1: prints vma-A, sets m->version=0x2000
Now, merge A/B to make C:
vma-C: (0x1000 -> 0x3000)
read() #2: find_vma(m->version=0x2000),
On 09/09/2016 07:04 AM, Dan Williams wrote:
On Thu, Sep 8, 2016 at 3:56 PM, Ross Zwisler
wrote:
On Wed, Sep 07, 2016 at 09:32:36PM -0700, Dan Williams wrote:
[ adding linux-fsdevel and linux-nvdimm ]
On Wed, Sep 7, 2016 at 8:36 PM, Xiao Guangrong
wrote:
[..]
However, it is not easy
On 09/08/2016 10:05 PM, Dave Hansen wrote:
On 09/07/2016 08:36 PM, Xiao Guangrong wrote:
The user will see two
VMAs in their output:
A: 0x1000->0x2000
C: 0x1000->0x3000
Will it confuse them to see the same virtual address range twice? Or is
there somethin
On 09/08/2016 12:34 AM, Dave Hansen wrote:
On 09/06/2016 11:51 PM, Xiao Guangrong wrote:
In order to fix this bug, we make 'file->version' indicate the next VMA
we want to handle
This new approach makes it more likely that we'll skip a new VMA that
gets inserted in between the read
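For illustration, a hedged sketch of that resume idea; resume_walk() and
its exact bookkeeping are illustrative, not the actual fs/proc/task_mmu.c
code:

/* Resume the traversal by address rather than by a possibly-stale VMA
 * pointer: find_vma() returns the first VMA whose vm_end lies above the
 * recorded address, so a VMA unmapped in between is skipped without
 * losing the ones after it. */
static struct vm_area_struct *resume_walk(struct mm_struct *mm,
					  unsigned long *version)
{
	struct vm_area_struct *vma = find_vma(mm, *version);

	if (vma)
		*version = vma->vm_end;	/* next read() resumes after this VMA */
	return vma;
}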
Sorry, the title should be [PATCH] mm, proc: Fix region lost in /proc/self/smaps
On 09/07/2016 02:51 PM, Xiao Guangrong wrote:
Recently, Redhat reported that the nvml test suite failed on QEMU/KVM;
for more detailed info please refer to:
https://bugzilla.redhat.com/show_bug.cgi?id=1365721
Actually
e lost if the last VMA is gone, e.g.:
The process VMA list is A->B->C->D

CPU 0                                  CPU 1
read() system call
  handle VMA B
  version = B
return to userspace
                                       unmap VMA B
issue read() again to continue to get
the region in
On 08/31/2016 01:09 AM, Dan Williams wrote:
Can you post your exact reproduction steps? This test is not failing for me.
Sure.
1. make the guest kernel based on your tree, the top commit is
10d7902fa0e82b (dax: unmap/truncate on device shutdown) and
the config file can be found in
On 08/30/2016 03:30 AM, Ross Zwisler wrote:
Can you please verify that you are using "usable" memory for your memmap? All
the details are here:
https://nvdimm.wiki.kernel.org/how_to_choose_the_correct_memmap_kernel_parameter_for_pmem_on_your_system
Sure.
This is the BIOS E820 info in
Hi Ross,
Sorry for the delay, I just returned from KVM Forum.
On 08/20/2016 02:30 AM, Ross Zwisler wrote:
On Fri, Aug 19, 2016 at 07:59:29AM -0700, Dan Williams wrote:
On Fri, Aug 19, 2016 at 4:19 AM, Xiao Guangrong
wrote:
Hi Dan,
Recently, Redhat reported that nvml test suite
On 07/06/2016 07:48 PM, Paolo Bonzini wrote:
On 06/07/2016 06:02, Xiao Guangrong wrote:
May I ask what exact issue you have with this interface for
Intel to support
your own GPU virtualization?
Intel's vGPU can work with this framework. We really appreciate your
/ nvidia's
On 07/06/2016 10:57 AM, Neo Jia wrote:
On Wed, Jul 06, 2016 at 10:35:18AM +0800, Xiao Guangrong wrote:
On 07/06/2016 10:18 AM, Neo Jia wrote:
On Wed, Jul 06, 2016 at 10:00:46AM +0800, Xiao Guangrong wrote:
On 07/05/2016 08:18 PM, Paolo Bonzini wrote:
On 05/07/2016 07:41, Neo Jia
On 07/06/2016 10:18 AM, Neo Jia wrote:
On Wed, Jul 06, 2016 at 10:00:46AM +0800, Xiao Guangrong wrote:
On 07/05/2016 08:18 PM, Paolo Bonzini wrote:
On 05/07/2016 07:41, Neo Jia wrote:
On Thu, Jun 30, 2016 at 03:01:49PM +0200, Paolo Bonzini wrote:
The vGPU folks would like to trap
On 07/05/2016 11:07 PM, Neo Jia wrote:
On Tue, Jul 05, 2016 at 05:02:46PM +0800, Xiao Guangrong wrote:
It is physically contiguous, but it is done during runtime; physically
contiguous doesn't mean a static partition at boot time. And only during
runtime, the proper HW resource
On 07/05/2016 08:18 PM, Paolo Bonzini wrote:
On 05/07/2016 07:41, Neo Jia wrote:
On Thu, Jun 30, 2016 at 03:01:49PM +0200, Paolo Bonzini wrote:
The vGPU folks would like to trap the first access to a BAR by setting
vm_ops on the VMAs produced by mmap-ing a VFIO device. The fault handler
On 07/05/2016 03:30 PM, Neo Jia wrote:
(Just for completeness, if you really want to use a device in the above example as
VFIO passthru, the second step is not completely handled in userspace, it is
actually the guest
driver who will allocate and setup the proper hw resource which will later
On 07/05/2016 01:16 PM, Neo Jia wrote:
On Tue, Jul 05, 2016 at 12:02:42PM +0800, Xiao Guangrong wrote:
On 07/05/2016 09:35 AM, Neo Jia wrote:
On Tue, Jul 05, 2016 at 09:19:40AM +0800, Xiao Guangrong wrote:
On 07/04/2016 11:33 PM, Neo Jia wrote:
Sorry, I think I misread
On 07/05/2016 09:35 AM, Neo Jia wrote:
On Tue, Jul 05, 2016 at 09:19:40AM +0800, Xiao Guangrong wrote:
On 07/04/2016 11:33 PM, Neo Jia wrote:
Sorry, I think I misread the "allocation" as "mapping". We only delay the
cpu mapping, not the allocation.
So how to under
On 07/04/2016 11:33 PM, Neo Jia wrote:
Sorry, I think I misread the "allocation" as "mapping". We only delay the
cpu mapping, not the allocation.
So how should I understand your statement:
"at that moment nobody has any knowledge about how the physical mmio gets
virtualized"
The resource,
On 07/04/2016 05:16 PM, Neo Jia wrote:
On Mon, Jul 04, 2016 at 04:45:05PM +0800, Xiao Guangrong wrote:
On 07/04/2016 04:41 PM, Neo Jia wrote:
On Mon, Jul 04, 2016 at 04:19:20PM +0800, Xiao Guangrong wrote:
On 07/04/2016 03:53 PM, Neo Jia wrote:
On Mon, Jul 04, 2016 at 03:37:35PM +0800
On 07/04/2016 04:45 PM, Xiao Guangrong wrote:
On 07/04/2016 04:41 PM, Neo Jia wrote:
On Mon, Jul 04, 2016 at 04:19:20PM +0800, Xiao Guangrong wrote:
On 07/04/2016 03:53 PM, Neo Jia wrote:
On Mon, Jul 04, 2016 at 03:37:35PM +0800, Xiao Guangrong wrote:
On 07/04/2016 03:03 PM, Neo Jia