On Mon, Sep 27, 2021 at 11:32:01AM -0400, Vivek Goyal wrote:
> On Mon, Sep 27, 2021 at 10:21:48AM +1000, Dave Chinner wrote:
> > On Thu, Sep 23, 2021 at 09:02:26PM -0400, Vivek Goyal wrote:
> > > In summary, there seem to be two use cases.
> > >
> > > A. vi
On Thu, Sep 23, 2021 at 09:02:26PM -0400, Vivek Goyal wrote:
> On Fri, Sep 24, 2021 at 08:26:18AM +1000, Dave Chinner wrote:
> > On Thu, Sep 23, 2021 at 03:02:41PM -0400, Vivek Goyal wrote:
> > > On Thu, Sep 23, 2021 at 05:25:23PM +0800, Jeffle Xu wrote:
> > >
hard to make the client dax=inode behaviour controllable from the
server side without any special client-side mount modes.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
test
> +# which has been written with the assumption that user.* xattrs
> +# will succeed on symlinks and special files.
> +user_xattr_allowed && _notrun "Kernel allows user.* xattrs on symlinks and
> +special files. Skipping this test. Run newer test instead."
On Wed, Sep 01, 2021 at 07:07:34PM -0400, Felix Kuehling wrote:
> On 2021-09-01 6:03 p.m., Dave Chinner wrote:
> > On Wed, Sep 01, 2021 at 11:40:43AM -0400, Felix Kuehling wrote:
> > > Am 2021-09-01 um 4:29 a.m. schrieb Christoph Hellwig:
> > > > On Mon, Aug 30, 2
t to fix - just look at the historical mess that RDMA
to/from file-backed and/or DAX pages has been.
So, really, from my perspective as a filesystem engineer, I want to
see an actual specification for how this new memory type is going to
interact with filesystems and the page cache, so everyone has some
idea of how this is going to work and can point out how it doesn't
work before code that simply doesn't work is pushed out into
production systems and then merged
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
ndle unexpected replies better") also makes mention of races with
timeout errors, and the above commit is touching the timeout error
handling.
Josef, this one looks like it is yours...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
J. Wong
> Signed-off-by: Darrick J. Wong
> ---
> fs/iomap/apply.c | 91 -
> fs/iomap/trace.h | 40 --
> include/linux/iomap.h | 10 -
> 3 files changed, 141 deletions(-)
Looks good.
Re
ewed-by: Darrick J. Wong
> Signed-off-by: Darrick J. Wong
> ---
> fs/iomap/fiemap.c | 31 +--
> 1 file changed, 13 insertions(+), 18 deletions(-)
Looks good.
Reviewed-by: Dave Chinner
--
Dave Chinner
da...@fromorbit.com
ng
> Signed-off-by: Darrick J. Wong
Looks like a straight translation of Christoph's original. Seems
fine to me as a simple step towards preserving the git history.
Reviewed-by: Dave Chinner
--
Dave Chinner
da...@fromorbit.com
> diff --git a/fs/iomap/apply.c b/fs/iomap/iter.c
> similarity index 100%
> rename from fs/iomap/apply.c
> rename to fs/iomap/iter.c
LGTM,
Reviewed-by: Dave Chinner
--
Dave Chinner
da...@fromorbit.com
the older pre-disaggregation
fs/iomap.c without having to take the tree back in time to find
those files...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
y changing when both are used in the same function.
Would it be better to avoid any possible confusion simply by using
"iomi" for all iomap_iter variables throughout the patchset from the
start? That way nobody is going to confuse iov_iter with iomap_iter
iteration variables and code that
_iterate() is fine as the function name - there's
no need for abbreviation here because it's not an overly long name.
This will make it clearly different from the struct iomap_iter that
is passed to it, and it will also make grep, cscope and other
code searching tools much more precise...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
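[To make the two suggestions above concrete, here is a sketch against
the iomap_iter API from this series; the foo_* names are invented for
illustration and are not from the patchset.]

#include <linux/fs.h>
#include <linux/iomap.h>
#include <linux/uio.h>

/* Stand-in for the per-mapping work a real caller would do. */
static s64 foo_process_mapping(struct iomap_iter *iomi, struct iov_iter *iter)
{
	return iomap_length(iomi);	/* claim the whole mapping as done */
}

static ssize_t foo_rw(struct kiocb *iocb, struct iov_iter *iter,
		      const struct iomap_ops *ops)
{
	/* "iomi" cannot be mistaken for the iov_iter "iter" above. */
	struct iomap_iter iomi = {
		.inode	= file_inode(iocb->ki_filp),
		.pos	= iocb->ki_pos,
		.len	= iov_iter_count(iter),
	};
	int ret;

	while ((ret = iomap_iter(&iomi, ops)) > 0)
		iomi.processed = foo_process_mapping(&iomi, iter);
	return ret;
}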
> + if (old_ma->fsx_projid != fa->fsx_projid &&
> + !projid_valid(make_kprojid(&init_user_ns, fa->fsx_projid)))
> + return -EINVAL;
> }
>
> /* Check extent size hints. */
Looks good. Thanks!
Reviewed-by
we had originally.
I don't think we want to go back to the unwritten allocation
behaviour - it sucked when it was first done because all DAX write
IO is synchronous, and it will still suck now because DAX writes are
still synchronous. What we really want to do here is copy the data
into the new extent before we commit the allocation transaction
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
e is valid.
*/
if (old_ma->fsx_projid != fa->fsx_projid &&
!projid_valid(make_kprojid(&init_user_ns, fa->fsx_projid)))
return -EINVAL;
}
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
tion are validated. Projids are
only allowed to be changed when current_user_ns() == &init_user_ns,
so this needs to be associated with that verification context.
This check should also use make_kprojid(), please, not open code
KPROJIDT_INIT.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
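[For concreteness, a minimal sketch of the check being asked for above,
assuming the fileattr-based validation context being reviewed; the
helper name is made up:]

#include <linux/cred.h>
#include <linux/fileattr.h>
#include <linux/projid.h>
#include <linux/user_namespace.h>

static int validate_projid_change(u32 old_projid, const struct fileattr *fa)
{
	/* projid changes are only valid from the initial user namespace */
	if (current_user_ns() != &init_user_ns)
		return -EINVAL;
	/* map through make_kprojid() rather than open coding KPROJIDT_INIT */
	if (old_projid != fa->fsx_projid &&
	    !projid_valid(make_kprojid(&init_user_ns, fa->fsx_projid)))
		return -EINVAL;
	return 0;
}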
> static struct page *dax_busy_page(void *entry)
> for_each_mapped_pfn(entry, pfn) {
> struct page *page = pfn_to_page(pfn);
>
> - if (page_ref_count(page) > 1)
> + if (!dax_layout_is_idle_page(page))
Here too.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
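[Reading between the lines of the hunk above, the new helper presumably
just encapsulates the inverse of the old open-coded check; a sketch,
which may differ from the actual patch body:]

#include <linux/page_ref.h>

/* A DAX page is idle once only the pgmap reference remains. */
static inline bool dax_layout_is_idle_page(struct page *page)
{
	return page_ref_count(page) <= 1;
}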
Signed-off-by: Pavel Reichl
> Suggested-by: Dave Chinner
> Suggested-by: Eric Sandeen
> Suggested-by: Darrick J. Wong
> Reviewed-by: Darrick J. Wong
> Reviewed-by: Christoph Hellwig
> Signed-off-by: Jan Kara
> ---
> fs/xfs/xfs_inode.c | 39 +
validate_lock,
> + 0);
> + return rwsem_is_locked(&VFS_I(ip)->i_mapping->invalidate_lock);
> }
And so here we are again, losing more of our read vs write debug
checks on debug kernels when lockdep is not
On Wed, May 19, 2021 at 11:00:03AM +0300, Avi Kivity wrote:
>
> On 18/05/2021 02.22, Dave Chinner wrote:
> >
> > > What I'd like to do is remove the fanout directories, so that for each
> > > logical
> > > "volume"[*] I have a single director
On Fri, May 14, 2021 at 09:17:30AM -0700, Darrick J. Wong wrote:
> On Fri, May 14, 2021 at 09:19:45AM +1000, Dave Chinner wrote:
> > On Thu, May 13, 2021 at 11:52:52AM -0700, Darrick J. Wong wrote:
> > > On Thu, May 13, 2021 at 07:44:59PM +0200, Jan Kara wrote:
> > >
You still need to use
fanout directories if you want concurrency during modification for
the cachefiles index, but that's a different design criterion
compared to directory capacity and modification/lookup scalability.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
physical inode cluster buffers underlying the inodes in
the situation where they also need to be locked.
We've been down this path before more than a decade ago when the
powers that be decreed that inode locking order is to be "by
structure address" rather than inode number, because "
/lore.kernel.org/linux-fsdevel/20210208163918.7871-1-j...@suse.cz/
> Link: http://lore.kernel.org/r/20210413105205.3093-1-j...@suse.cz
>
> CC: ceph-de...@vger.kernel.org
> CC: Chao Yu
> CC: Damien Le Moal
> CC: "Darrick J. Wong"
>
with the extent level manipulations and so user data
modifications cannot occur until the physical extent manipulation
operation has completed.
Having only just realised this is a problem, no solution has
immediately popped into my mind. I'll chew on it over the weekend,
but I'm not hopeful at this p
On Fri, Apr 16, 2021 at 10:14:39AM +0530, Bharata B Rao wrote:
> On Wed, Apr 07, 2021 at 08:28:07AM +1000, Dave Chinner wrote:
> > On Mon, Apr 05, 2021 at 11:18:48AM +0530, Bharata B Rao wrote:
> >
> > > As an alternative approach, I have this below hack that does lazy
On Wed, Apr 14, 2021 at 01:16:52AM -0600, Yu Zhao wrote:
> On Tue, Apr 13, 2021 at 10:50 PM Dave Chinner wrote:
> > On Tue, Apr 13, 2021 at 09:40:12PM -0600, Yu Zhao wrote:
> > > On Tue, Apr 13, 2021 at 5:14 PM Dave Chinner wrote:
> > > > Profiles would be intere
On Wed, Apr 14, 2021 at 08:43:36AM -0600, Jens Axboe wrote:
> On 4/13/21 5:14 PM, Dave Chinner wrote:
> > On Tue, Apr 13, 2021 at 10:13:24AM -0600, Jens Axboe wrote:
> >> On 4/13/21 1:51 AM, SeongJae Park wrote:
> >>> From: SeongJae Park
> >>>
> >
On Tue, Apr 13, 2021 at 09:40:12PM -0600, Yu Zhao wrote:
> On Tue, Apr 13, 2021 at 5:14 PM Dave Chinner wrote:
> > On Tue, Apr 13, 2021 at 10:13:24AM -0600, Jens Axboe wrote:
> > > On 4/13/21 1:51 AM, SeongJae Park wrote:
> > > > From: SeongJ
atching page cache removal better (e.g. fewer, larger
batches) and so spending less time contending on the mapping tree
lock...
IOWs, I suspect this result actually stems from less lock
contention due to a change in the batch processing characteristics
of the new algorithm, rather than from it being a "better" algorithm...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Tue, Apr 13, 2021 at 01:18:35AM +0200, Thomas Gleixner wrote:
> Dave,
>
> On Tue, Apr 13 2021 at 08:15, Dave Chinner wrote:
> > On Mon, Apr 12, 2021 at 05:20:53PM +0200, Thomas Gleixner wrote:
> >> On Wed, Apr 07 2021 at 07:22, Dave Chinner wrote:
> >
On Mon, Apr 12, 2021 at 05:20:53PM +0200, Thomas Gleixner wrote:
> Dave,
>
> On Wed, Apr 07 2021 at 07:22, Dave Chinner wrote:
> > On Tue, Apr 06, 2021 at 02:28:34PM +0100, Matthew Wilcox wrote:
> >> On Tue, Apr 06, 2021 at 10:33:43PM +1000, Dave Chinner wrote:
ives you a fairly accurate picture of the
page cache usage within the container.
This has none of the issues that arise from "sb != mnt_ns" that
walking superblocks and inode lists have, and it doesn't require you
to play games with mounts, superblocks and inode references
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
could be based on fstype - most
virtual filesystems that expose system information do not really
need full memcg awareness because they are generally only visible to
a single memcg instance...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Tue, Apr 06, 2021 at 02:28:34PM +0100, Matthew Wilcox wrote:
> On Tue, Apr 06, 2021 at 10:33:43PM +1000, Dave Chinner wrote:
> > +++ b/fs/inode.c
> > @@ -57,8 +57,7 @@
> >
> > static unsigned int i_hash_mask __read_mostly;
> > static unsigned int i_hash
From: Dave Chinner
Because scalability of the global inode_hash_lock really, really
sucks and prevents me from doing scalability characterisation and
analysis of bcachefs algorithms.
Profiles of a 32-way concurrent create of 51.2m inodes with fsmark
on a couple of different filesystems
From: Dave Chinner
In preparation for switching the VFS inode cache over to hlist_bl
lists, we need to be able to fake a list node that looks like it is
hashed, for correct operation of filesystems that don't directly use
the VFS inode cache.
Signed-off-by: Dave Chinner
---
include/linux
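[The helper this patch describes likely mirrors the existing
hlist_add_fake()/hlist_fake() pair in <linux/list.h>; a sketch of the
hlist_bl equivalent, not the patch body itself:]

#include <linux/list_bl.h>

/* Make a node look hashed without putting it on any list. */
static inline void hlist_bl_add_fake(struct hlist_bl_node *n)
{
	n->pprev = &n->next;	/* pprev points back at ourselves */
}

static inline bool hlist_bl_fake(struct hlist_bl_node *n)
{
	return n->pprev == &n->next;
}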
From: Dave Chinner
In preparation for changing the inode hash table implementation.
Signed-off-by: Dave Chinner
---
fs/inode.c | 44 +---
1 file changed, 25 insertions(+), 19 deletions(-)
diff --git a/fs/inode.c b/fs/inode.c
index a047ab306f9a
Hi folks,
Recently I've been doing some scalability characterisation of
various filesystems, and one of the limiting factors that has
prevented me from exploring filesystem characteristics is the
inode hash table, namely the global inode_hash_lock that protects
it.
This has long been a problem,
On Thu, Mar 18, 2021 at 12:20:35PM -0700, Dan Williams wrote:
> On Wed, Mar 17, 2021 at 9:58 PM Dave Chinner wrote:
> >
> > On Wed, Mar 17, 2021 at 09:08:23PM -0700, Dan Williams wrote:
> > > Jason wondered why the get_user_pages_fast() path takes references
So, yeah, I think this should simply be a single ranged call to the
filesystem like:
->memory_failure(dev, 0, -1ULL)
to tell the filesystem that the entire backing device has gone away,
and leave the filesystem to handle failure entirely at the
filesystem level.
-Dave.
--
Dave Chinner
da...@fromorbit.com
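[Sketching the interface described above - the op name comes from the
text, while the surrounding structure and type names are illustrative
only, not an existing kernel API:]

#include <linux/types.h>

struct dax_device;

struct dax_failure_ops {
	/* notify the fs that [offset, offset + len) of the device is bad */
	int (*memory_failure)(struct dax_device *dax_dev, u64 offset, u64 len);
};

/*
 * Whole-device removal then collapses to a single ranged call:
 *	ops->memory_failure(dax_dev, 0, -1ULL);
 */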
"struct cage" as in Compound pAGE
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
(truncates, fallocates, fsync, xattrs, unlink+link of tmpfile) - and this
> can take quite a long time. The cache needs to be more proactive in
> getting stuff committed as it goes along.
Workqueues give you an easy mechanism for async dispatch and
concurrency for synchronous operations
runs a conversion
transaction.
So, yeah, if you use FIEMAP to determine where data lies in a file
that is being actively modified, you're going to get corrupt data
sooner rather than later. SEEK_HOLE/DATA are coherent with in-memory
user data, so they don't have this problem.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
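[A userspace sketch of the coherent alternative recommended above -
walking data ranges with SEEK_DATA/SEEK_HOLE rather than FIEMAP:]

#define _GNU_SOURCE
#include <unistd.h>

static void walk_data_ranges(int fd, off_t size)
{
	off_t data = 0, hole;

	for (;;) {
		data = lseek(fd, data, SEEK_DATA);
		if (data < 0 || data >= size)
			break;			/* ENXIO: no more data */
		hole = lseek(fd, data, SEEK_HOLE);
		if (hole < 0)
			break;
		/* [data, hole) holds data coherent with in-memory state */
		data = hole;
	}
}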
ng data to be
inaccessible.
Hence "remove" notifications just don't work in the storage stack.
They need to be translated to block ranges going bad (i.e. media
errors), and reported to higher layers as bad ranges, not as device
removal.
The same goes for DAX devices. The moment
On Mon, Mar 01, 2021 at 07:33:28PM -0800, Dan Williams wrote:
> On Mon, Mar 1, 2021 at 6:42 PM Dave Chinner wrote:
> [..]
> > We do not need a DAX specific mechanism to tell us "DAX device
> > gone", we need a generic block device interface that tells us "
On Mon, Mar 01, 2021 at 04:32:36PM -0800, Dan Williams wrote:
> On Mon, Mar 1, 2021 at 2:47 PM Dave Chinner wrote:
> > Now we have the filesytem people providing a mechanism for the pmem
> > devices to tell the filesystems about physical device failures so
> > they can
On Mon, Mar 01, 2021 at 12:55:53PM -0800, Dan Williams wrote:
> On Sun, Feb 28, 2021 at 2:39 PM Dave Chinner wrote:
> >
> > On Sat, Feb 27, 2021 at 03:40:24PM -0800, Dan Williams wrote:
> > > On Sat, Feb 27, 2021 at 2:36 PM Dave Chinner wrote:
> > > > On F
On Sat, Feb 27, 2021 at 03:40:24PM -0800, Dan Williams wrote:
> On Sat, Feb 27, 2021 at 2:36 PM Dave Chinner wrote:
> > On Fri, Feb 26, 2021 at 02:41:34PM -0800, Dan Williams wrote:
> > > On Fri, Feb 26, 2021 at 1:28 PM Dave Chinner wrote:
> > > > On Fri, Feb 26,
On Fri, Feb 26, 2021 at 02:41:34PM -0800, Dan Williams wrote:
> On Fri, Feb 26, 2021 at 1:28 PM Dave Chinner wrote:
> > On Fri, Feb 26, 2021 at 12:59:53PM -0800, Dan Williams wrote:
> > > On Fri, Feb 26, 2021 at 12:51 PM Dave Chinner wrote:
> > > > > My imm
On Fri, Feb 26, 2021 at 12:59:53PM -0800, Dan Williams wrote:
> On Fri, Feb 26, 2021 at 12:51 PM Dave Chinner wrote:
> >
> > On Fri, Feb 26, 2021 at 11:24:53AM -0800, Dan Williams wrote:
> > > On Fri, Feb 26, 2021 at 11:05 AM Darrick J. Wong
> > > wrote:
>
X pages we get a new page fault. In processing the fault, the
filesystem will try to get direct access to the pmem from the block
device. This will get an ENODEV error from the block device because
the backing store (pmem) has been unplugged and is no longer
there...
AFAICT, as long as pmem removal invalidates all the active ptes that
point at the pmem being removed, the filesystem doesn't need to
care about device removal at all, DAX or no DAX...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
this
point about cross-device XCOPY at this point?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Fri, Feb 12, 2021 at 03:54:48PM -0800, Darrick J. Wong wrote:
> On Sat, Feb 13, 2021 at 10:27:26AM +1100, Dave Chinner wrote:
> > On Fri, Feb 12, 2021 at 03:07:39PM -0800, Ian Lance Taylor wrote:
> > > On Fri, Feb 12, 2021 at 3:03 PM Dave Chinner wrote:
> > > >
On Fri, Feb 12, 2021 at 03:07:39PM -0800, Ian Lance Taylor wrote:
> On Fri, Feb 12, 2021 at 3:03 PM Dave Chinner wrote:
> >
> > On Fri, Feb 12, 2021 at 04:45:41PM +0100, Greg KH wrote:
> > > On Fri, Feb 12, 2021 at 07:33:57AM -0800, Ian Lance Taylor wrote:
> > > &
eaking? What changed in
> > the kernel that caused this? Procfs has been around for a _very_ long
> > time :)
>
> That would be because of (v5.3):
>
> 5dae222a5ff0 vfs: allow copy_file_range to copy across devices
>
> The intention of this change (series) was to allow
ism for copying data from one random file
descriptor to another.
The use of it as a general file copy mechanism in the Go system
library is incorrect. It is a userspace bug: userspace has done the
wrong thing, and userspace needs to be fixed.
-Dave.
--
Dave Chinner
da...@fromorbit.com
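[The userspace-side fix being argued for looks roughly like this - try
copy_file_range(), and fall back to plain read()/write() when it copies
nothing. A sketch only, not Go's actual fix:]

#define _GNU_SOURCE
#include <errno.h>
#include <unistd.h>

static int copy_fd(int in, int out)
{
	char buf[65536];
	ssize_t n;

	/* Fast path: in-kernel copy, advancing both file offsets. */
	do {
		n = copy_file_range(in, NULL, out, NULL, sizeof(buf), 0);
	} while (n > 0);

	if (n < 0 && errno != EINVAL && errno != EXDEV && errno != ENOSYS)
		return -1;

	/* Slow path also catches "zero length" procfs-style files. */
	while ((n = read(in, buf, sizeof(buf))) > 0) {
		for (ssize_t off = 0; off < n; ) {
			ssize_t w = write(out, buf + off, n - off);
			if (w < 0)
				return -1;
			off += w;
		}
	}
	return n < 0 ? -1 : 0;
}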
r a bound
workqueue, too, especially when you consider that the workqueue
completion code will merge sequential ioends into one ioend, hence
making the IO completion loop counts bigger and latency problems worse
rather than better...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
ctories and files in the tree...
So, yeah, we do indeed do thousands of these fsxattr based
operations a second, sometimes tens of thousands a second or more,
and sometimes they are issued in bulk in performance critical paths
for container build/deployment operations
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
special zero-length files that contain ephemeral data, userspace can't
actually tell that they contain data using stat(). So as far as
userspace is concerned, copy_file_range() correctly returned zero
bytes copied from a zero-byte file and there's nothing more to do.
This zero-length file behaviour is, fundamentally, a kernel
filesystem implementation bug, not a copy_file_range() bug.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Tue, Jan 26, 2021 at 11:50:50AM +0800, Nicolas Boichat wrote:
> On Tue, Jan 26, 2021 at 9:34 AM Dave Chinner wrote:
> >
> > On Mon, Jan 25, 2021 at 03:54:31PM +0800, Nicolas Boichat wrote:
> > > Hi copy_file_range experts,
> > >
> > > We hit this in
o read unconditionally from the file. Hence
it happily returns non-existent stale data from busted filesystem
implementations that allow data to be read from beyond EOF...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
vide the same benefit to all the filesystems that use it.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Fri, Jan 08, 2021 at 11:56:57AM -0500, Brian Foster wrote:
> On Fri, Jan 08, 2021 at 08:54:44AM +1100, Dave Chinner wrote:
> > e.g. we run the first transaction into the CIL, it steals the space
> > needed for the CIL checkpoint headers for the transaction. Then if
> >
On Mon, Jan 11, 2021 at 11:38:48AM -0500, Brian Foster wrote:
> On Fri, Jan 08, 2021 at 11:56:57AM -0500, Brian Foster wrote:
> > On Fri, Jan 08, 2021 at 08:54:44AM +1100, Dave Chinner wrote:
> > > On Mon, Jan 04, 2021 at 11:23:53AM -0500, Brian Foster wrote:
> > > >
ll do if you crash or even just unmount/mount a
filesystem that doesn't persist it.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
so allow the work skipped on each memcg to be accrued and
accounted across multiple calls to the shrinkers for the same
memcg. Hence as memory pressure within the memcg goes up, the
repeated calls to direct reclaim within that memcg will result in
all of the freeable items in each cache eventually being freed...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
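[As a toy model of the accrual described above - the real shrinker code
keeps per-memcg nr_deferred counts; the names here are illustrative:]

struct memcg_shrink_state {
	long deferred;		/* work skipped on earlier calls */
};

static long shrink_one_memcg(struct memcg_shrink_state *st, long to_scan,
			     long (*scan)(long nr))
{
	long total = to_scan + st->deferred;	/* carry skipped work over */
	long done = scan(total);		/* may do less than asked */

	st->deferred = total - done;		/* accrue the remainder */
	return done;
}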
On Fri, Jan 08, 2021 at 03:59:22PM +0800, Ming Lei wrote:
> On Thu, Jan 07, 2021 at 09:21:11AM +1100, Dave Chinner wrote:
> > On Wed, Jan 06, 2021 at 04:45:48PM +0800, Ming Lei wrote:
> > > On Tue, Jan 05, 2021 at 07:39:38PM +0100, Christoph Hellwig wrote:
> > > &
On Sun, Jan 03, 2021 at 05:03:33PM +0100, Donald Buczek wrote:
> On 02.01.21 23:44, Dave Chinner wrote:
> > On Sat, Jan 02, 2021 at 08:12:56PM +0100, Donald Buczek wrote:
> > > On 31.12.20 22:59, Dave Chinner wrote:
> > > > On Thu, Dec 31, 2020 at 12:48:5
On Mon, Jan 04, 2021 at 11:23:53AM -0500, Brian Foster wrote:
> On Thu, Dec 31, 2020 at 09:16:11AM +1100, Dave Chinner wrote:
> > On Wed, Dec 30, 2020 at 12:56:27AM +0100, Donald Buczek wrote:
> > > If the value goes below the limit while some threads are
> > > already
ine whether we should do a large or small bio vec allocation
in the iomap writeback path...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Sat, Jan 02, 2021 at 08:12:56PM +0100, Donald Buczek wrote:
> On 31.12.20 22:59, Dave Chinner wrote:
> > On Thu, Dec 31, 2020 at 12:48:56PM +0100, Donald Buczek wrote:
> > > On 30.12.20 23:16, Dave Chinner wrote:
> > One could argue that, but one should al
that lifts the context setup into
xfs_trans_alloc() back into the patchset before adding the
current->journal functionality patch.
Also, you need to test XFS code with CONFIG_XFS_DEBUG=y so that
asserts are actually built into the code and exercised, because this
ASSERT should have fired on the first rolling transaction that the
kernel executes...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Thu, Dec 31, 2020 at 12:48:56PM +0100, Donald Buczek wrote:
> On 30.12.20 23:16, Dave Chinner wrote:
> > On Wed, Dec 30, 2020 at 12:56:27AM +0100, Donald Buczek wrote:
> > > Threads, which committed items to the CIL, wait in the
> > > xc_push_wait waitqueue when use
wake_up_all(&cil->xc_push_wait);
That just smells wrong to me. It *might* be correct, but this
condition should pair with the sleep condition, as space used by a
CIL context should never actually decrease
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
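[The pairing being asked for, in rough outline - simplified fragments
from the fs/xfs/xfs_log_cil.c code under discussion, not a verbatim
quote of either side:]

/* Sleep side: throttle committers while the current context is too big. */
if (cil->xc_ctx->space_used >= XLOG_CIL_BLOCKING_SPACE_LIMIT(log))
	xlog_wait(&cil->xc_push_wait, &cil->xc_push_lock);

/*
 * Wake side: should only run once that condition has actually changed,
 * i.e. after the push has swapped in a new, empty CIL context.
 */
wake_up_all(&cil->xc_push_wait);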