On Thu 12-11-20 12:19:51, Jan Kara wrote:
> [added some relevant people and lists to CC]
>
> On Wed 11-11-20 17:44:05, Maxim Levitsky wrote:
> > On Wed, 2020-11-11 at 17:39 +0200, Maxim Levitsky wrote:
> > > clone of "starship_production"
> >
If something happened to the page cache
outside of the discarded range (like you describe above), that is a kernel bug
that needs to get fixed. EBUSY should really mean: someone wrote to the
discarded range while the discard was running, and the userspace app has to
deal with that depending on what it aims to do...
Honza
--
Jan Kara
SUSE Labs, CR
Signed-off-by: Pankaj Gupta
The patch looks good to me. You can add:
Reviewed-by: Jan Kara
Honza
> ---
> fs/ext4/file.c | 11 ++-
> 1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/fs/ext
does not support VM_SYNC.
>
> Suggested-by: Jan Kara
> Signed-off-by: Pankaj Gupta
The patch looks good to me. You can add:
Reviewed-by: Jan Kara
Honza
> ---
> include/linux/dax.h | 17 +
will mean generic_nvdimm_flush(). Just so that people don't
get confused by the code.
Honza
Is there a need to define dax_synchronous() for !CONFIG_DAX? Because that
property of a dax device is pretty much undefined, and I don't see anything
needing to call it for !CONFIG_DAX...
Honza
does not support VM_SYNC.
>
> Suggested-by: Jan Kara
> Signed-off-by: Pankaj Gupta
> ---
> include/linux/dax.h | 23 +++
> 1 file changed, 23 insertions(+)
>
> diff --git a/include/linux/dax.h b/include/linux/dax.h
> index b896706a5ee9..4a2a60ffec86
> */
> - if (!IS_DAX(file_inode(filp)) && (vma->vm_flags & VM_SYNC))
> - return -EOPNOTSUPP;
> + if (vma->vm_flags & VM_SYNC) {
> + int err = is_synchronous(filp, dax_dev);
> + if (err)
> +
On Wed 03-04-19 16:10:17, Pankaj Gupta wrote:
> Virtio pmem provides asynchronous host page cache flush
> mechanism. We don't support 'MAP_SYNC' with virtio pmem
> and ext4.
>
> Signed-off-by: Pankaj Gupta
The patch looks good to me. You can add:
Review
So can I imagine this as the guest mmap()ing the host file and
providing the mapped range as "NVDIMM pages" to the kernel inside the
guest? Or is it more complex?
Honza
Right. Thinking about this, I would be more concerned about the fact that
the guest can effectively pin an amount of the host's page cache up to the
size of the device/file passed to the guest as PMEM, can't it, Pankaj? Or is
there some QEMU magic that avoids this?
Honza
way of doing this? Having
virtio_pmem_host_cache_enabled() check in filesystem code just looks like
filesystem sniffing into details it should not care about... Maybe just
naming this (or having a wrapper) dax_dev_map_sync_supported()?
Honza
On Mon 24-07-17 08:10:05, Dan Williams wrote:
> On Mon, Jul 24, 2017 at 5:37 AM, Jan Kara <j...@suse.cz> wrote:
> > On Mon 24-07-17 08:06:07, Pankaj Gupta wrote:
> >>
> >> > On Sun 23-07-17 13:10:34, Dan Williams wrote:
> >> > > On Sun, Jul
image file at that moment - in fact
> > you must do that for metadata IO to hit persistent storage anyway in your
> > setting. This would very closely follow how exporting block devices with
> > volatile cache works with KVM these days AFAIU and the performance will be
> > the same.
>
> yes 'blkdev_issue_flush' does set 'REQ_OP_WRITE | REQ_PREFLUSH' flags.
> As per the suggestions, it looks like a block flushing device is the way ahead.
>
> If we do an asynchronous block flush at the guest side (put the current
> task in a wait queue till the host-side fdatasync completes), would that
> solve the purpose? Or do we need another paravirt device for this?
Well, even currently if you have PMEM device, you still have also a block
device and a request queue associated with it and metadata IO goes through
that path. So in your case you will have the same in the guest as a result
of exposing virtual PMEM device to the guest and you just need to make sure
this virtual block device behaves the same way as traditional virtualized
block devices in KVM in response to 'REQ_OP_WRITE | REQ_PREFLUSH' requests.
Honza
What you could do instead is to completely ignore ->flush calls for the
PMEM device and instead catch the bio with REQ_PREFLUSH flag set on the
PMEM device (generated by blkdev_issue_flush() or the journalling
machinery) and fdatasync() the whole image file at that moment - in fact
you must do that for metadata IO to hit persistent storage anyway in your
setting. This would very closely follow how exporting block devices with
volatile cache works with KVM these days AFAIU and the performance will be
the same.
Honza
On Thu 08-09-16 14:47:08, Ross Zwisler wrote:
> On Tue, Sep 06, 2016 at 05:06:20PM +0200, Jan Kara wrote:
> > On Thu 01-09-16 20:57:38, Ross Zwisler wrote:
> > > On Wed, Aug 31, 2016 at 04:44:47PM +0800, Xiao Guangrong wrote:
> > > > On 08/31/201
> ... that the tools in CentOS 6 are so old that it's not worth worrying about. For
> reference, the kernel in CentOS 6 is based on 2.6.32. :) DAX was introduced
> in v4.0.
Hum, can you post 'dumpe2fs -h /dev/pmem0' output from that system when the
md5sum fails? Because the only idea I have is that mkfs.ext4 in CentOS 6
creates the filesystem with a different set of features than more recent
e2fsprogs and so we hit some untested path...
Honza
Can you try unmounting
the fs after the problem happens? And then run e2fsck on the problematic
filesystem and send the output here?
Honza
over some period,
try increasing the number of threads in the next period, and if it
helps significantly, use the larger number; otherwise go back to a
smaller number?
Honza
On Tue 20-07-10 17:41:33, Michael Tokarev wrote:
20.07.2010 16:46, Jan Kara wrote:
Hi,
On Fri 02-07-10 16:46:28, Michael Tokarev wrote:
I noticed that qcow2 images, esp. fresh ones (so that they
receive lots of metadata updates) are very