From: Sebastien Boeuf
On MMIO a new set of registers is defined for finding SHM
regions. Add their definitions and use them to find the region.
Signed-off-by: Sebastien Boeuf
Cc: k...@vger.kernel.org
Cc: virtualizat...@lists.linux-foundation.org
Cc: "Michael S. Tsirkin"
---
to daemon (like we do for direct I/O path). This will keep write and
i_size change atomic w.r.t crash.
Signed-off-by: Stefan Hajnoczi
Signed-off-by: Dr. David Alan Gilbert
Signed-off-by: Vivek Goyal
Signed-off-by: Miklos Szeredi
Signed-off-by: Liu Bo
Signed-off-by: Peng Tao
Cc: Dave Chinner
.
Signed-off-by: Stefan Hajnoczi
Signed-off-by: Dr. David Alan Gilbert
Signed-off-by: Vivek Goyal
Signed-off-by: Sebastien Boeuf
Signed-off-by: Liu Bo
---
fs/fuse/virtio_fs.c| 139 +
include/uapi/linux/virtio_fs.h | 3 +
2 files changed, 142 insertions
Divide the dax memory range into fixed size ranges (2MB for now) and put
them in a list. This will track free ranges. Once an inode requires a
free range, we will take one from here and put it in the interval
tree of ranges assigned to the inode.
Signed-off-by: Vivek Goyal
Signed-off-by: Peng Tao
From: Stefan Hajnoczi
Add DAX mmap() support.
Signed-off-by: Stefan Hajnoczi
---
fs/fuse/file.c | 62 +-
1 file changed, 61 insertions(+), 1 deletion(-)
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 99457d0b14b9..f1ad8b95b546 100644
---
We look for a free range in the following order.
A. Try to get a free range.
B. If not, try direct reclaim.
C. If not, wait for a memory range to become free
Signed-off-by: Vivek Goyal
Signed-off-by: Liu Bo
---
fs/fuse/file.c | 482 +++-
fs/fuse/fuse_i.h
lusive and avoid all the above problems.
Signed-off-by: Vivek Goyal
Cc: Dave Chinner
---
fs/fuse/dir.c| 32 ++-
fs/fuse/file.c | 81 +---
fs/fuse/fuse_i.h | 9 ++
fs/fuse/inode.c | 1 +
4 files changed, 112 insertions(
This is done along the lines of ext4 and xfs. I primarily wanted ->writepages
hook at this time so that I could call into dax_writeback_mapping_range().
This in turn will decide which pfns need to be written back.
Signed-off-by: Vivek Goyal
---
fs/fuse/file.c | 21 -
1 f
From: Sebastien Boeuf
On PCI the shm regions are found using capability entries;
find a region by searching for the capability.
Signed-off-by: Sebastien Boeuf
Signed-off-by: Dr. David Alan Gilbert
Signed-off-by: kbuild test robot
Acked-by: Michael S. Tsirkin
Cc: k...@vger.kernel.org
Cc:
map support
Vivek Goyal (13):
dax: Modify bdev_dax_pgoff() to handle NULL bdev
dax: Create a range version of dax_layout_busy_page()
virtiofs: Provide a helper function for virtqueue initialization
fuse: Get rid of no_mount_options
fuse,virtiofs: Add a mount option to enable dax
f
From: Sebastien Boeuf
Virtio defines 'shared memory regions' that provide a continuously
shared region between the host and guest.
Provide a method to find a particular region on a device.
Signed-off-by: Sebastien Boeuf
Signed-off-by: Dr. David Alan Gilbert
Acked-by: Michael S. Tsirkin
Cc:
Introduce two new fuse commands to setup/remove memory mappings. This
will be used to setup/tear down file mapping in dax window.
Signed-off-by: Vivek Goyal
Signed-off-by: Peng Tao
---
include/uapi/linux/fuse.h | 29 +
1 file changed, 29 insertions(+)
diff --git
this issue properly. So to make
progress, it seems this patch is the least bad option for now and I hope
we can take it.
Signed-off-by: Stefan Hajnoczi
Signed-off-by: Vivek Goyal
Reviewed-by: Jan Kara
Cc: Christoph Hellwig
Cc: Dan Williams
Cc: Jan Kara
Cc: Vishal L Verma
Cc: "Weiny, Ira"
On Thu, Aug 13, 2020 at 07:51:56PM -0700, Gurchetan Singh wrote:
> On Mon, Aug 10, 2020 at 7:50 AM Vivek Goyal wrote:
>
> > On Mon, Aug 10, 2020 at 10:05:17AM -0400, Michael S. Tsirkin wrote:
> > > On Fri, Aug 07, 2020 at 03:55:10PM -0400, Vivek Goyal wrote:
> >
On Mon, Aug 17, 2020 at 06:53:39PM +0200, Jan Kara wrote:
> On Fri 07-08-20 15:55:08, Vivek Goyal wrote:
> > virtiofs device has a range of memory which is mapped into file inodes
> > using dax. This memory is mapped in qemu on host and maps different
> > sections of re
On Wed, Aug 12, 2020 at 11:23:45AM +1000, Dave Chinner wrote:
> On Tue, Aug 11, 2020 at 01:55:30PM -0400, Vivek Goyal wrote:
> > On Tue, Aug 11, 2020 at 08:22:38AM +1000, Dave Chinner wrote:
> > > On Fri, Aug 07, 2020 at 03:55:21PM -0400, Vivek Goyal wrote:
> > > >
On Tue, Aug 11, 2020 at 08:22:38AM +1000, Dave Chinner wrote:
> On Fri, Aug 07, 2020 at 03:55:21PM -0400, Vivek Goyal wrote:
> > We need some kind of locking mechanism here. Normal file systems like
> > ext4 and xfs seems to take their own semaphore to protect agains
> >
On Tue, Aug 11, 2020 at 08:06:55AM +1000, Dave Chinner wrote:
> On Fri, Aug 07, 2020 at 03:55:19PM -0400, Vivek Goyal wrote:
> > This patch implements basic DAX support. mmap() is not implemented
> > yet and will come in later patches. This patch looks into implementing
On Mon, Aug 10, 2020 at 10:29:13AM +0200, Miklos Szeredi wrote:
> On Fri, Aug 7, 2020 at 9:55 PM Vivek Goyal wrote:
> >
> > fuse_file_put(sync) can be called with sync=true/false. If sync=true,
> > it waits for release request response and then calls iput() in the
> >
On Mon, Aug 10, 2020 at 10:29:13AM +0200, Miklos Szeredi wrote:
> On Fri, Aug 7, 2020 at 9:55 PM Vivek Goyal wrote:
> >
> > fuse_file_put(sync) can be called with sync=true/false. If sync=true,
> > it waits for release request response and then calls iput() in the
> >
On Mon, Aug 10, 2020 at 10:05:17AM -0400, Michael S. Tsirkin wrote:
> On Fri, Aug 07, 2020 at 03:55:10PM -0400, Vivek Goyal wrote:
> > From: Sebastien Boeuf
> >
> > On PCI the shm regions are found using capability entries;
> > find a region by searching for the capa
On Mon, Aug 10, 2020 at 09:47:15AM -0400, Michael S. Tsirkin wrote:
> On Fri, Aug 07, 2020 at 03:55:09PM -0400, Vivek Goyal wrote:
> > From: Sebastien Boeuf
> >
> > Virtio defines 'shared memory regions' that provide a continuously
> > shared region between the host
On Mon, Aug 10, 2020 at 09:29:47AM +0200, Miklos Szeredi wrote:
> On Fri, Aug 7, 2020 at 9:55 PM Vivek Goyal wrote:
> >
>
> > Most of the changes are limited to fuse/virtiofs. There are couple
> > of changes needed in generic dax infrastructure and couple of changes
of this function named
dax_layout_busy_page_range() which can be used to pass a range which
needs to be unmapped.
Cc: Dan Williams
Cc: linux-nvd...@lists.01.org
Signed-off-by: Vivek Goyal
---
fs/dax.c| 66 -
include/linux/dax.h | 6 +
2
-by: Vivek Goyal
---
fs/fuse/file.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 605976a586c2..f103355bf71f 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -467,7 +467,7 @@ int fuse_open_common(struct inode *inode, struct file
*file
where the
host page sizes are different.
Signed-off-by: Stefan Hajnoczi
Signed-off-by: Vivek Goyal
---
fs/fuse/fuse_i.h | 5 -
fs/fuse/inode.c | 19 +--
include/uapi/linux/fuse.h | 4 +++-
3 files changed, 24 insertions(+), 4 deletions(-)
diff --git a/fs
: Implement get_shm_region for MMIO transport
Stefan Hajnoczi (2):
virtio_fs, dax: Set up virtio_fs dax_device
fuse,dax: add DAX mmap support
Vivek Goyal (15):
dax: Modify bdev_dax_pgoff() to handle NULL bdev
dax: Create a range version of dax_layout_busy_page()
virtiofs: Provide
From: Sebastien Boeuf
On MMIO a new set of registers is defined for finding SHM
regions. Add their definitions and use them to find the region.
Signed-off-by: Sebastien Boeuf
Cc: k...@vger.kernel.org
Cc: "Michael S. Tsirkin"
---
drivers/virtio/virtio_mmio.c | 32
tions does not work anymore. What
we need is a per-mount-option flag so that the filesystem can
specify which options to show.
Add a few such flags to control the behavior in a more fine-grained
manner and get rid of no_mount_options.
Signed-off-by: Vivek Goyal
---
fs/fuse/fuse_i.h| 14
From: Sebastien Boeuf
On PCI the shm regions are found using capability entries;
find a region by searching for the capability.
Signed-off-by: Sebastien Boeuf
Signed-off-by: Dr. David Alan Gilbert
Signed-off-by: kbuild test robot
Cc: k...@vger.kernel.org
Cc: "Michael S. Tsirkin"
---
From: Sebastien Boeuf
Virtio defines 'shared memory regions' that provide a continuously
shared region between the host and guest.
Provide a method to find a particular region on a device.
Signed-off-by: Sebastien Boeuf
Signed-off-by: Dr. David Alan Gilbert
Cc: k...@vger.kernel.org
Cc:
Add a mount option to allow using dax with virtio_fs.
Signed-off-by: Vivek Goyal
---
fs/fuse/fuse_i.h| 7
fs/fuse/inode.c | 3 ++
fs/fuse/virtio_fs.c | 82 +
3 files changed, 78 insertions(+), 14 deletions(-)
diff --git a/fs/fuse
This reduces code duplication and makes the code a little easier to read.
Signed-off-by: Vivek Goyal
---
fs/fuse/virtio_fs.c | 50 +++--
1 file changed, 30 insertions(+), 20 deletions(-)
diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
index
We look for a free range in the following order.
A. Try to get a free range.
B. If not, try direct reclaim.
C. If not, wait for a memory range to become free
Signed-off-by: Vivek Goyal
Signed-off-by: Liu Bo
---
fs/fuse/file.c | 525 +++-
fs/fuse/fuse_i.h
to daemon (like we do for direct I/O path). This will keep write and
i_size change atomic w.r.t crash.
Signed-off-by: Stefan Hajnoczi
Signed-off-by: Dr. David Alan Gilbert
Signed-off-by: Vivek Goyal
Signed-off-by: Miklos Szeredi
Signed-off-by: Liu Bo
Signed-off-by: Peng Tao
---
fs/fuse/file.c
This is done along the lines of ext4 and xfs. I primarily wanted ->writepages
hook at this time so that I could call into dax_writeback_mapping_range().
This in turn will decide which pfns need to be written back.
Signed-off-by: Vivek Goyal
---
fs/fuse/file.c | 21 -
1 f
this issue properly. So to make
progress, it seems this patch is the least bad option for now and I hope
we can take it.
Signed-off-by: Stefan Hajnoczi
Signed-off-by: Vivek Goyal
Cc: Christoph Hellwig
Cc: Dan Williams
Cc: linux-nvd...@lists.01.org
---
drivers/dax/super.c | 3 ++-
1 file changed, 2
about circular dependencies. So define a new fuse_inode->i_mmap_sem.
Signed-off-by: Vivek Goyal
---
fs/fuse/dir.c| 2 ++
fs/fuse/file.c | 15 ---
fs/fuse/fuse_i.h | 7 +++
fs/fuse/inode.c | 1 +
4 files changed, 22 insertions(+), 3 deletions(-)
diff --git a/fs/fuse/dir.
Introduce two new fuse commands to setup/remove memory mappings. This
will be used to setup/tear down file mapping in dax window.
Signed-off-by: Vivek Goyal
Signed-off-by: Peng Tao
---
include/uapi/linux/fuse.h | 29 +
1 file changed, 29 insertions(+)
diff --git
g fuse replies from daemon on the host).
That means it blocks the worker thread, which then stops processing
further replies, and the system deadlocks.
So for now, force sync release of the file in case of DAX inodes.
Signed-off-by: Vivek Goyal
---
fs/fuse/file.c | 14 +-
1 file changed, 13 insertions
Divide the dax memory range into fixed size ranges (2MB for now) and put
them in a list. This will track free ranges. Once an inode requires a
free range, we will take one from here and put it in the interval
tree of ranges assigned to the inode.
Signed-off-by: Vivek Goyal
Signed-off-by: Peng Tao
This list will be used to select a fuse_dax_mapping to free when the
number of free mappings drops below a threshold.
Signed-off-by: Vivek Goyal
---
fs/fuse/file.c | 22 ++
fs/fuse/fuse_i.h | 7 +++
fs/fuse/inode.c | 4
3 files changed, 33 insertions(+)
diff --git
From: Stefan Hajnoczi
Add DAX mmap() support.
Signed-off-by: Stefan Hajnoczi
---
fs/fuse/file.c | 62 +-
1 file changed, 61 insertions(+), 1 deletion(-)
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 194fe3e404a7..be7d90eb5b41 100644
---
.
Signed-off-by: Stefan Hajnoczi
Signed-off-by: Dr. David Alan Gilbert
Signed-off-by: Vivek Goyal
Signed-off-by: Sebastien Boeuf
Signed-off-by: Liu Bo
---
fs/fuse/virtio_fs.c| 139 +
include/uapi/linux/virtio_fs.h | 3 +
2 files changed, 142 insertions
On Mon, Jul 20, 2020 at 05:13:59PM -0400, Vivek Goyal wrote:
> Page fault error handling behavior in kvm seems a little inconsistent when
> page fault reports error. If we are doing fault synchronously
> then we capture error (-EFAULT) returned by __gfn_to_pfn_memslot() and
> exit t
On Thu, Aug 06, 2020 at 02:04:18PM +0800, kernel test robot wrote:
> tree: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
> master
> head: fffe3ae0ee84e25d2befe2ae59bc32aa2b6bc77b
> commit: a62a8ef9d97da23762a588592c8b8eb50a8deb6a virtio-fs: add virtiofs
> filesystem
>
On Wed, Aug 05, 2020 at 09:44:39AM -0400, Michael S. Tsirkin wrote:
> Virtio fs is modern-only. Use LE accessors for config space.
>
> Signed-off-by: Michael S. Tsirkin
Acked-by: Vivek Goyal
Vivek
> ---
> fs/fuse/virtio_fs.c | 4 ++--
> 1 file changed, 2 insertio
On Mon, Aug 03, 2020 at 04:59:13PM -0400, Michael S. Tsirkin wrote:
> Since fs is a modern-only device,
> tag config space fields as having little endian-ness.
>
> Signed-off-by: Michael S. Tsirkin
virtio spec does list this field as "le32".
Acked-by: Vivek Goyal
Vivek
On Mon, Jul 27, 2020 at 06:09:32PM +0200, Vitaly Kuznetsov wrote:
> Vivek Goyal writes:
>
> > On Mon, Jul 20, 2020 at 05:13:59PM -0400, Vivek Goyal wrote:
> >> Page fault error handling behavior in kvm seems a little inconsistent when
> >> page fault report
On Mon, Jul 20, 2020 at 05:13:59PM -0400, Vivek Goyal wrote:
> Page fault error handling behavior in kvm seems a little inconsistent when
> page fault reports error. If we are doing fault synchronously
> then we capture error (-EFAULT) returned by __gfn_to_pfn_memslot() and
> exit t
a warning by making kvm_find_error_gfn() static.
Change from v1:
- Maintain a cache of error gfns, instead of single gfn. (Vitaly)
Signed-off-by: Vivek Goyal
---
arch/x86/include/asm/kvm_host.h | 2 ++
arch/x86/kvm/mmu.h | 2 +-
arch/x86/kvm/mmu/mmu.c | 2 +-
arch/x86
On Fri, Jul 17, 2020 at 12:14:00PM +0200, Vitaly Kuznetsov wrote:
[..]
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 6d6a0ae7800c..a0e6283e872d 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -4078,7 +4078,7 @@ static bool
On Thu, Jul 09, 2020 at 08:54:42AM -0400, Vivek Goyal wrote:
> Page fault error handling behavior in kvm seems a little inconsistent when
> page fault reports error. If we are doing fault synchronously
> then we capture error (-EFAULT) returned by __gfn_to_pfn_memslot() and
> exit t
will force sync fault and exit to user space.
Change from v2:
- Fixed a warning by making kvm_find_error_gfn() static.
Change from v1:
- Maintain a cache of error gfns, instead of single gfn. (Vitaly)
Signed-off-by: Vivek Goyal
---
arch/x86/include/asm/kvm_host.h | 2 ++
arch/x86
will force sync fault and exit to user space.
Change from v1:
- Maintain a cache of error gfns, instead of single gfn. (Vitaly)
Signed-off-by: Vivek Goyal
---
arch/x86/include/asm/kvm_host.h | 2 ++
arch/x86/kvm/mmu.h | 2 +-
arch/x86/kvm/mmu/mmu.c | 2 +-
arch/x86
On Tue, Jun 30, 2020 at 05:43:54PM +0200, Vitaly Kuznetsov wrote:
> Vivek Goyal writes:
>
> > On Tue, Jun 30, 2020 at 05:13:54PM +0200, Vitaly Kuznetsov wrote:
> >>
> >> > - If you retry in kernel, we will change the context completely that
> >> >
On Tue, Jun 30, 2020 at 05:13:54PM +0200, Vitaly Kuznetsov wrote:
> Vivek Goyal writes:
>
> > On Tue, Jun 30, 2020 at 03:24:43PM +0200, Vitaly Kuznetsov wrote:
>
> >>
> >> It's probably me who's missing something important here :-) but I think
> >>
On Tue, Jun 30, 2020 at 03:24:43PM +0200, Vitaly Kuznetsov wrote:
> Vivek Goyal writes:
>
> > On Mon, Jun 29, 2020 at 10:56:25PM +0200, Vitaly Kuznetsov wrote:
> >> Vivek Goyal writes:
> >>
> >> > On Fri, Jun 26, 2020 a
On Mon, Jun 29, 2020 at 10:56:25PM +0200, Vitaly Kuznetsov wrote:
> Vivek Goyal writes:
>
> > On Fri, Jun 26, 2020 at 11:25:19AM +0200, Vitaly Kuznetsov wrote:
> >
> > [..]
> >> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> >&
On Thu, Jun 25, 2020 at 12:02:53PM +0300, Vasily Averin wrote:
> In the current implementation fuse_writepages_fill() tries to share the code:
> for a new wpa it calls tree_insert() with num_pages = 0,
> then switches to the common code using the unmodified num_pages
> and increments it at the very end.
>
>
On Fri, Jun 26, 2020 at 11:25:19AM +0200, Vitaly Kuznetsov wrote:
[..]
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 76817d13c86e..a882a6a9f7a7 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -4078,7 +4078,7 @@ static bool
help ease the issue.
Signed-off-by: Vivek Goyal
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/mmu.h | 2 +-
arch/x86/kvm/mmu/mmu.c | 2 +-
arch/x86/kvm/x86.c | 14 +++---
4 files changed, 14 insertions(+), 5 deletions(-)
diff --git a/arch/x
On Wed, Jun 17, 2020 at 04:05:48PM -0700, Sean Christopherson wrote:
> On Wed, Jun 17, 2020 at 04:00:52PM -0700, Sean Christopherson wrote:
> > On Wed, Jun 17, 2020 at 05:51:52PM -0400, Vivek Goyal wrote:
> > What I'm saying is that KVM cannot do the filtering. KVM, by design, do
On Wed, Jun 17, 2020 at 11:32:24AM -0700, Sean Christopherson wrote:
> On Wed, Jun 17, 2020 at 03:12:03PM +0200, Vitaly Kuznetsov wrote:
> > Vivek Goyal writes:
> >
> > > As of now asynchronous page fault mechanism assumes host will always be
> > > succe
On Wed, Jun 17, 2020 at 03:12:03PM +0200, Vitaly Kuznetsov wrote:
> Vivek Goyal writes:
>
> > As of now asynchronous page fault mechanism assumes host will always be
> > successful in resolving page fault. So there are only two states, that
> > is page is not p
On Wed, Jun 17, 2020 at 03:02:10PM +0200, Vitaly Kuznetsov wrote:
> Vivek Goyal writes:
>
> > Page fault error handling behavior in kvm seems a little inconsistent when
> > page fault reports error. If we are doing fault synchronously
> > then we capture
or not.
Any feedback or comments are welcome.
Thanks
Vivek
Vivek Goyal (3):
kvm,x86: Force sync fault if previous attempts failed
kvm: Add capability to be able to report async pf error to guest
kvm, async_pf: Use FOLL_WRITE only for write faults
Documentation/virt/kvm/cpuid.rst | 4
SIGBUS to guest process or do exception table handling or possibly
die).
Hence, we don't want to get -EFAULT erroneously. Pass FOLL_WRITE only
if it is a write fault.
Signed-off-by: Vivek Goyal
---
arch/x86/kvm/mmu/mmu.c | 7 ---
include/linux/kvm_host.h | 4 +++-
virt/kvm/async_pf.c | 9
d maintain an array of error
gfn later to help ease the issue.
Signed-off-by: Vivek Goyal
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/mmu.h | 2 +-
arch/x86/kvm/mmu/mmu.c | 2 +-
arch/x86/kvm/x86.c | 19 +--
include/linux/kvm_host.h
-by: Vivek Goyal
---
Documentation/virt/kvm/cpuid.rst | 4 +++
Documentation/virt/kvm/msr.rst | 10 +---
arch/x86/include/asm/kvm_host.h | 3 +++
arch/x86/include/asm/kvm_para.h | 8 +++---
arch/x86/include/uapi/asm/kvm_para.h | 10 ++--
arch/x86/kernel/kvm.c
oint of time,
and they fall back to synchronous page fault upon failure. This
could be changed back once somebody needs a specific error code.
Acked-by: Vivek Goyal
Vivek
> ---
> arch/s390/kvm/kvm-s390.c | 20 +---
> arch/x86/kvm/mmu/mmu.c | 4 ++--
> include
On Mon, May 25, 2020 at 04:41:23PM +0200, Vitaly Kuznetsov wrote:
> KVM now supports using interrupt for 'page ready' APF event delivery and
> legacy mechanism was deprecated. Switch KVM guests to the new one.
Hi Vitaly,
I see we have all this code in guest which tries to take care of
cases
On Wed, Jun 10, 2020 at 12:47:38PM -0700, Sean Christopherson wrote:
> On Wed, Jun 10, 2020 at 03:32:11PM -0400, Vivek Goyal wrote:
> > On Wed, Jun 10, 2020 at 07:55:32PM +0200, Vitaly Kuznetsov wrote:
> > > 'Page not present' event may or may not get injected depending on
&
to always be able to inject 'page not present', the
> change is effectively a nop.
>
> Suggested-by: Vivek Goyal
> Signed-off-by: Vitaly Kuznetsov
> ---
> arch/s390/include/asm/kvm_host.h | 2 +-
> arch/s390/kvm/kvm-s390.c | 4 +++-
> arch/x86/include/asm/k
On Wed, Jun 10, 2020 at 11:01:39AM +0200, Vitaly Kuznetsov wrote:
> Paolo Bonzini writes:
>
> > On 09/06/20 21:10, Vivek Goyal wrote:
> >> Hi Vitaly,
> >>
> >> Have a question about page ready events.
> >>
> >> Now we deliver PAGE_NO
On Mon, May 25, 2020 at 04:41:20PM +0200, Vitaly Kuznetsov wrote:
[..]
> void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
>struct kvm_async_pf *work)
> {
> - struct x86_exception fault;
> + struct kvm_lapic_irq irq = {
> + .delivery_mode
r all. The Grand Plan is to switch to using e.g. #VE for 'page
> > not present' events and normal APIC interrupts for 'page ready' events.
> > This series does the later.
> >
> > Changes since v1:
> > - struct kvm_vcpu_pv_apf_data's fields renamed to 'flags' and 'token'
On Wed, Jun 03, 2020 at 07:47:14PM +0200, gli...@google.com wrote:
> Under certain circumstances (we found this out running Docker on a
> Clang-built kernel with CONFIG_INIT_STACK_ALL) ovl_copy_xattr() may
> return uninitialized value of |error| from ovl_copy_xattr().
If we are returning
On Thu, May 28, 2020 at 10:42:38AM +0200, Vitaly Kuznetsov wrote:
> Vivek Goyal writes:
>
> > On Mon, May 25, 2020 at 04:41:17PM +0200, Vitaly Kuznetsov wrote:
> >>
> >
> > [..]
> >> diff --git a/arch/x86/include/asm/kvm_host.h
> >> b/arch
On Mon, May 25, 2020 at 04:41:17PM +0200, Vitaly Kuznetsov wrote:
>
[..]
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 0a6b35353fc7..c195f63c1086 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -767,7 +767,7
On Sat, May 23, 2020 at 06:34:17PM +0200, Vitaly Kuznetsov wrote:
> Vivek Goyal writes:
>
> > On Mon, May 11, 2020 at 06:47:46PM +0200, Vitaly Kuznetsov wrote:
> >> Currently, APF mechanism relies on the #PF abuse where the token is being
> >> passed through CR2. If
On Mon, May 11, 2020 at 06:47:46PM +0200, Vitaly Kuznetsov wrote:
> Currently, APF mechanism relies on the #PF abuse where the token is being
> passed through CR2. If we switch to using interrupts to deliver page-ready
> notifications we need a different way to pass the data. Extend the existing
>
On Wed, Apr 08, 2020 at 12:07:22AM +0200, Paolo Bonzini wrote:
> On 07/04/20 23:41, Andy Lutomirski wrote:
> > 2. Access to bad memory results in #MC. Sure, #MC is a turd, but
> > it’s an *architectural* turd. By all means, have a nice simple PV
> > mechanism to tell the #MC code exactly what
On Tue, May 19, 2020 at 03:12:42PM -0700, Dan Williams wrote:
> The original copy_mc_fragile() implementation had negative performance
> implications since it did not use the fast-string instruction sequence
> to perform copies. For this reason copy_mc_to_kernel() fell back to
> plain memcpy() to
On Fri, May 15, 2020 at 09:18:07PM +0200, Paolo Bonzini wrote:
> On 15/05/20 20:46, Sean Christopherson wrote:
> >> The new one using #VE is not coming very soon (we need to emulate it for
> >> >> going to keep "page not ready" delivery using #PF for some time or even
> >> forever. However, page
On Thu, May 14, 2020 at 10:08:37AM +0200, Vitaly Kuznetsov wrote:
> Vivek Goyal writes:
>
> > On Wed, May 13, 2020 at 04:23:55PM +0200, Vitaly Kuznetsov wrote:
> >
> > [..]
> >> >> Also,
> >> >> kdump kernel may not even support APF so
On Wed, May 13, 2020 at 04:23:55PM +0200, Vitaly Kuznetsov wrote:
[..]
> >> Also,
> >> kdump kernel may not even support APF so it will get very confused when
> >> APF events get delivered.
> >
> > New kernel can just ignore these events if it does not support async
> > pf?
> >
> > This is
On Mon, May 11, 2020 at 06:47:44PM +0200, Vitaly Kuznetsov wrote:
> Concerns were expressed around (ab)using #PF for KVM's async_pf mechanism,
> it seems that re-using #PF exception for a PV mechanism wasn't a great
> idea after all. The Grand Plan is to switch to using e.g. #VE for 'page
> not
On Wed, May 13, 2020 at 09:53:50AM -0400, Vivek Goyal wrote:
[..]
> > > And this notion of same structure being shared across multiple events
> > > at the same time is just going to create more confusion, IMHO. If we
> > > can decouple it by serializing it, th
On Wed, May 13, 2020 at 11:03:48AM +0200, Vitaly Kuznetsov wrote:
> Vivek Goyal writes:
>
> > On Tue, May 12, 2020 at 05:50:53PM +0200, Vitaly Kuznetsov wrote:
> >> Vivek Goyal writes:
> >>
> >> >
> >> > So if we are using a common st
On Tue, May 12, 2020 at 10:50:17AM -0700, Sean Christopherson wrote:
> On Tue, May 12, 2020 at 11:53:39AM -0400, Vivek Goyal wrote:
> > On Tue, May 12, 2020 at 05:40:10PM +0200, Vitaly Kuznetsov wrote:
> > > Vivek Goyal writes:
> > >
> > > > On Mon, M
On Tue, May 12, 2020 at 05:40:10PM +0200, Vitaly Kuznetsov wrote:
> Vivek Goyal writes:
>
> > On Mon, May 11, 2020 at 06:47:46PM +0200, Vitaly Kuznetsov wrote:
> >> Currently, APF mechanism relies on the #PF abuse where the token is being
> >> passed through CR2. If
On Tue, May 12, 2020 at 05:50:53PM +0200, Vitaly Kuznetsov wrote:
> Vivek Goyal writes:
>
> > On Mon, May 11, 2020 at 06:47:48PM +0200, Vitaly Kuznetsov wrote:
> >> Concerns were expressed around APF delivery via synthetic #PF exception as
> >> in some cases such
On Tue, May 12, 2020 at 05:40:10PM +0200, Vitaly Kuznetsov wrote:
> Vivek Goyal writes:
>
> > On Mon, May 11, 2020 at 06:47:46PM +0200, Vitaly Kuznetsov wrote:
> >> Currently, APF mechanism relies on the #PF abuse where the token is being
> >> passed through CR2. If
Hi Vitaly,
Are there any corresponding qemu patches as well to enable the new
functionality? I wanted to test it.
Thanks
Vivek
On Mon, May 11, 2020 at 06:47:44PM +0200, Vitaly Kuznetsov wrote:
> Concerns were expressed around (ab)using #PF for KVM's async_pf mechanism,
> it seems that re-using #PF
On Mon, May 11, 2020 at 06:47:46PM +0200, Vitaly Kuznetsov wrote:
> Currently, APF mechanism relies on the #PF abuse where the token is being
> passed through CR2. If we switch to using interrupts to deliver page-ready
> notifications we need a different way to pass the data. Extend the existing
>
On Mon, May 11, 2020 at 06:47:48PM +0200, Vitaly Kuznetsov wrote:
> Concerns were expressed around APF delivery via synthetic #PF exception as
> in some cases such delivery may collide with real page fault. For type 2
> (page ready) notifications we can easily switch to using an interrupt
>
On Wed, May 06, 2020 at 05:17:57PM +0200, Vitaly Kuznetsov wrote:
[..]
> >
> > So either we need a way to report errors back while doing synchronous
> > page faults or we can't fall back to synchronous page faults while
> > async page faults are enabled.
> >
> > While we are reworking async
On Thu, Apr 30, 2020 at 06:21:45PM -0700, Dan Williams wrote:
> On Thu, Apr 30, 2020 at 5:10 PM Linus Torvalds
> wrote:
> >
> > On Thu, Apr 30, 2020 at 4:52 PM Dan Williams
> > wrote:
> > >
> > > You had me until here. Up to this point I was grokking that Andy's
> > > "_fallible" suggestion
On Wed, Apr 29, 2020 at 12:53:33PM +0200, Paolo Bonzini wrote:
> On 29/04/20 11:36, Vitaly Kuznetsov wrote:
> > +
> > + if (__this_cpu_read(apf_reason.enabled)) {
> > + reason = __this_cpu_read(apf_reason.reason);
> > + if (reason == KVM_PV_REASON_PAGE_READY) {
> > +