to break physical data
sharing and so the page with the file data in it physically changes
during ->page_mkwrite (because DAX). Hence we need to restart the
page fault to map the new page correctly because the file no longer
points at the page that was originally faulted.
With this stashed-page-and-retry mechanism implemented for
->page_mkwrite, we could stash the new page in the vmf and tell the
fault to retry, and everything would just work. Without
->page_mkwrite support, it's just not that interesting and I have
higher priority things to deal with right now
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
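[Editor's illustration: a minimal kernel-style sketch of the stash-the-new-page-and-retry idea described above. This is not current ->page_mkwrite behaviour; example_break_cow() is a hypothetical stand-in for the filesystem breaking physical data sharing, and the sketch only shows the shape of the mechanism.]

#include <linux/err.h>
#include <linux/mm.h>

/* Hypothetical helper: breaks sharing, returns the page the file now uses. */
static struct page *example_break_cow(struct file *file, pgoff_t index);

static vm_fault_t example_page_mkwrite(struct vm_fault *vmf)
{
	struct page *new_page;

	/* Break physical data sharing; DAX may hand back a different page. */
	new_page = example_break_cow(vmf->vma->vm_file, vmf->pgoff);
	if (IS_ERR(new_page))
		return vmf_error(PTR_ERR(new_page));

	if (new_page != vmf->page) {
		/* Stash the new page and ask the fault path to restart. */
		vmf->page = new_page;
		return VM_FAULT_RETRY;
	}

	/* File still points at the faulted page: complete as usual. */
	return VM_FAULT_LOCKED;
}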
On Sat, Dec 01, 2018 at 02:49:09AM -0500, Sasha Levin wrote:
> On Sat, Dec 01, 2018 at 08:50:05AM +1100, Dave Chinner wrote:
> >On Fri, Nov 30, 2018 at 05:14:41AM -0500, Sasha Levin wrote:
> >>On Fri, Nov 30, 2018 at 09:22:03AM +0100, Greg KH wrote:
> >>>On Fri, No
On Fri, Nov 30, 2018 at 05:14:41AM -0500, Sasha Levin wrote:
> On Fri, Nov 30, 2018 at 09:22:03AM +0100, Greg KH wrote:
> >On Fri, Nov 30, 2018 at 09:40:19AM +1100, Dave Chinner wrote:
> >>I stopped my tests at 5 billion ops yesterday (i.e. 20 billion ops
> >>agg
On Fri, Nov 30, 2018 at 09:22:03AM +0100, Greg KH wrote:
> On Fri, Nov 30, 2018 at 09:40:19AM +1100, Dave Chinner wrote:
> > On Thu, Nov 29, 2018 at 01:47:56PM +0100, Greg KH wrote:
> > > On Thu, Nov 29, 2018 at 11:14:59PM +1100, Dave Chinner wrote:
> > > >
&
On Thu, Nov 29, 2018 at 01:47:56PM +0100, Greg KH wrote:
> On Thu, Nov 29, 2018 at 11:14:59PM +1100, Dave Chinner wrote:
> >
> > Cherry picking only one of the 50-odd patches we've committed into
> > late 4.19 and 4.20 kernels to fix the problems we've found really
regression test
fixes that, in some cases, took hundreds of millions of fsx ops to
expose.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Thu, Nov 29, 2018 at 12:55:43AM -0500, Sasha Levin wrote:
> From: Dave Chinner
>
> [ Upstream commit b450672fb66b4a991a5b55ee24209ac7ae7690ce ]
>
> If we are doing sub-block dio that extends EOF, we need to zero
> the unused tail of the block to initialise the data in
On Thu, Nov 29, 2018 at 01:00:59AM -0500, Sasha Levin wrote:
> From: Dave Chinner
>
> [ Upstream commit b450672fb66b4a991a5b55ee24209ac7ae7690ce ]
>
> If we are doing sub-block dio that extends EOF, we need to zero
> the unused tail of the block to initialise the data in
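[Editor's illustration: the zeroing the quoted commit message describes comes down to simple block arithmetic: round the write's end offset up to the containing block boundary and zero the gap in between. A standalone userspace sketch with illustrative names, not the upstream code.]

#include <stdint.h>
#include <stdio.h>

/* Byte range to zero; len == 0 means the write ended on a block boundary. */
struct zero_range {
	uint64_t start;
	uint64_t len;
};

/*
 * For a sub-block write ending at write_end beyond the old EOF, the rest
 * of that block would otherwise expose stale data and must be zeroed.
 * block_size is assumed to be a power of two, as filesystem block sizes are.
 */
static struct zero_range tail_to_zero(uint64_t write_end, uint64_t block_size)
{
	uint64_t block_end = (write_end + block_size - 1) & ~(block_size - 1);
	struct zero_range zr = { .start = write_end, .len = block_end - write_end };

	return zr;
}

int main(void)
{
	/* A write ending at byte 1000 of a 4096-byte block leaves 3096 bytes. */
	struct zero_range zr = tail_to_zero(1000, 4096);

	printf("zero %llu bytes from offset %llu\n",
	       (unsigned long long)zr.len, (unsigned long long)zr.start);
	return 0;
}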
On Mon, Nov 12, 2018 at 08:23:42PM -0800, Joe Perches wrote:
> On Tue, 2018-11-13 at 14:09 +1100, Dave Chinner wrote:
> > On Mon, Nov 12, 2018 at 08:54:10PM -0500, Theodore Y. Ts'o wrote:
> > > On Tue, Nov 13, 2018 at 12:18:05PM +1100, Dave Chinner wrote:
> > > > I'm
On Mon, Nov 12, 2018 at 08:54:10PM -0500, Theodore Y. Ts'o wrote:
> On Tue, Nov 13, 2018 at 12:18:05PM +1100, Dave Chinner wrote:
> > I'm not interested in making code fast if distro support engineers
> > can't debug problems on user systems easily. Optimising for
>
On Mon, Nov 12, 2018 at 02:30:01PM -0800, Joe Perches wrote:
> On Tue, 2018-11-13 at 08:45 +1100, Dave Chinner wrote:
> > On Mon, Nov 12, 2018 at 02:12:08PM -0600, Eric Sandeen wrote:
> > > On 11/10/18 7:21 PM, Joe Perches wrote:
> > > > Reduce total object size quit
traces. It flattens them way too much to
be able to tell how we got to a specific location in the code.
In reality, being able to find problems quickly and efficiently is
far more important to us than being able to run everything at
ludicrous speed
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
columns is preferred" but they are wrong.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
doesn't solve the problem.
The problem is that this specific implementation of per-cpu
counters needs to be summed on every read. Hence, when you have a
huge number of CPUs, each read requires a per-cpu iteration that
takes a substantial amount of time.
If only we had percpu counters with a fixed, extremely low read
overhead that doesn't care about the number of CPUs in the
machine...
Oh, wait, we do: percpu_counters.[ch].
This all seems like a counter implementation deficiency to me, not
an interface problem...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
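[Editor's illustration: the percpu_counters.[ch] API referred to above provides exactly that split: cheap batched per-cpu updates plus an O(1) approximate read whose cost does not grow with the CPU count. A minimal kernel-style sketch of the usage pattern; struct foo_stats and its callers are made up for illustration.]

#include <linux/gfp.h>
#include <linux/percpu_counter.h>

/* Hypothetical stats object used only for illustration. */
struct foo_stats {
	struct percpu_counter events;
};

static int foo_stats_init(struct foo_stats *fs)
{
	/* Starts at 0; allocation of the per-cpu storage may sleep. */
	return percpu_counter_init(&fs->events, 0, GFP_KERNEL);
}

static void foo_stats_event(struct foo_stats *fs)
{
	/* Cheap per-cpu add; folded into the shared count in batches. */
	percpu_counter_add(&fs->events, 1);
}

static s64 foo_stats_read(struct foo_stats *fs)
{
	/*
	 * O(1) approximate read of the shared count - no per-cpu
	 * summation, so the cost does not grow with the number of CPUs.
	 * percpu_counter_sum() remains available when an exact value
	 * is required.
	 */
	return percpu_counter_read(&fs->events);
}

static void foo_stats_destroy(struct foo_stats *fs)
{
	percpu_counter_destroy(&fs->events);
}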
On Tue, Nov 06, 2018 at 12:00:06PM +0100, Jan Kara wrote:
> On Tue 06-11-18 13:47:15, Dave Chinner wrote:
> > On Mon, Nov 05, 2018 at 04:26:04PM -0800, John Hubbard wrote:
> > > On 11/5/18 1:54 AM, Jan Kara wrote:
> > > > Hmm, have you tried larger buffer sizes? B
I'd argue that the IO latency impact is far worse than a 20%
throughput drop.
i.e. you can make up for throughput drops by running a deeper
queue/more dispatch threads, but you can't reduce IO latency at
all...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Sat, Nov 03, 2018 at 10:13:37AM -0700, Linus Torvalds wrote:
> On Fri, Nov 2, 2018 at 4:36 PM Dave Chinner wrote:
> >
> > On Fri, Nov 02, 2018 at 09:35:23AM -0700, Linus Torvalds wrote:
> > >
> > > I don't love the timing of this at the end of the merge window
On Fri, Nov 02, 2018 at 09:35:23AM -0700, Linus Torvalds wrote:
> On Thu, Nov 1, 2018 at 10:15 PM Dave Chinner wrote:
> >
> > Can you please pull update containing a rework of the VFS clone and
> > dedupe file range infrastructure from the tag listed below?
>
&
k.h | 15 +-
include/linux/fs.h| 55 --
mm/filemap.c | 146 +++---
20 files changed, 734 insertions(+), 596 deletions(-)
--
Dave Chinner
da...@fromorbit.com
tream maintainer when your tree
> is submitted for merging. You may also want to consider cooperating
> with the maintainer of the conflicting tree to minimise any particularly
> complex conflicts.
Looks ok. I didn't expect this conflict, but looks simple enough
to resolve. Thanks!
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
->remap_file_range(). See Documentation/filesystems/vfs.txt for more
> + information.
Looks good - I knew about this one from merging back into a recent
Linus kernel.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
't validate the input
properly".
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
but the cycle is not valid.
And that's the problem. Neither the head nor the tail blocks are
validated before they are used. CRC checking of the head and tail
blocks comes later.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
in xrep_findroot_block
Dave Chinner (2):
xfs: issue log message on user force shutdown
xfs: fix use-after-free race in xfs_buf_rele
fs/xfs/libxfs/xfs_attr.c | 236 -
fs/xfs/{ => libxfs}/xfs_attr.h | 2 +
fs/xfs/libxfs/xfs_bmap.c |
On Sat, Oct 13, 2018 at 12:34:12AM -0700, John Hubbard wrote:
> On 10/12/18 8:55 PM, Dave Chinner wrote:
> > On Thu, Oct 11, 2018 at 11:00:12PM -0700, john.hubb...@gmail.com wrote:
> >> From: John Hubbard
> [...]
> >> diff --git a/include/linux/mm_types.h b/inclu
read/write direct IO and so the pages
passed to gup will be on the active/inactive LRUs. Hence I can't see
how you can have dual use of the LRU list head like this.
What am I missing here?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
xfs: zero posteof blocks when cloning above eof
xfs: update ctime and remove suid before cloning files
Dave Chinner (2):
xfs: fix data corruption w/ unaligned dedupe ranges
xfs: fix data corruption w/ unaligned reflink ranges
fs/xfs/xfs_reflink.c | 200
On Fri, Oct 05, 2018 at 12:46:40PM -0700, Andrew Morton wrote:
> On Fri, 5 Oct 2018 15:45:26 +1000 Dave Chinner wrote:
>
> > From: Dave Chinner
> >
> > We've recently seen a workload on XFS filesystems with a repeatable
> > deadlock between background
From: Dave Chinner
We've recently seen a workload on XFS filesystems with a repeatable
deadlock between background writeback and a multi-process
application doing concurrent writes and fsyncs to a small range of a
file.
range_cyclic
writeback Process 1 Process 2
in xfs_bmap_punch_delalloc_range
xfs: skip delalloc COW blocks in xfs_reflink_end_cow
Darrick J. Wong (1):
xfs: don't crash the vfs on a garbage inline symlink
Dave Chinner (3):
xfs: avoid lockdep false positives in xfs_trans_alloc
xfs: fix transaction leak in xfs_reflink_allocate_cow
On Wed, Oct 03, 2018 at 05:20:31AM +1000, James Morris wrote:
> On Tue, 2 Oct 2018, Dave Chinner wrote:
>
> > On Tue, Oct 02, 2018 at 06:08:16AM +1000, James Morris wrote:
> > > On Mon, 1 Oct 2018, Darrick J. Wong wrote:
> > >
> > > > If we /did/ replace
On Mon, Oct 01, 2018 at 05:47:57AM -0700, Christoph Hellwig wrote:
> On Mon, Oct 01, 2018 at 04:11:27PM +1000, Dave Chinner wrote:
> > This reminds me so much of Linux mmap() in the mid-2000s - mmap()
> > worked for ext3 without being aware of page faults,
>
> And &q
The "in root we trust" model is pretty deeply ingrained up
and down the storage stack. I also suspect that most of our hardware
admin (not just storage) has similar assumptions about the security
model they operate in.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
ls under
CAP_SYS_STORAGE_ADMIN?
Maybe I'm missing something, but I don't see how that improves the
situation w.r.t. locked down LSM configurations?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Mon, Oct 01, 2018 at 03:47:23PM +1000, Aleksa Sarai wrote:
> On 2018-10-01, Dave Chinner wrote:
> > > I've added some selftests for this, but it's not clear to me whether
> > > they should live here or in xfstests (as far as I can tell there are no
> > > other
the physical storage even though the filesystem has freed
the space it is accessing. This is a use after free of the physical
storage that the filesystem cannot control, and why DAX+RDMA is
disabled right now.
We could address these use-after-free situations via forcing RDMA to
use
generic VFS tests in xfstests). If you'd prefer them to be included in
> xfstests, let me know.
xfstests, please. That way the new functionality will get immediate
coverage by all the main filesystem development and distro QA
teams
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
pe definition to say this is allowed.
Systems restricted by LSMs to the point where CAP_SYS_ADMIN is not
trusted have exactly the same issues. i.e. there's nobody trusted by
the kernel to administer the storage stack, and nobody has defined a
workable security model that can prevent untrusted users from
violating the existing storage trust model
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Fri, Sep 28, 2018 at 07:23:42AM +1000, James Morris wrote:
> On Thu, 27 Sep 2018, Dave Chinner wrote:
>
> > Sure, but there are so many CAP_SYS_ADMIN-only ioctls in the kernel
> > that have no LSM coverage that this is not an isolated problem that
> > people sett
On Wed, Sep 26, 2018 at 09:23:03AM -0400, Stephen Smalley wrote:
> On 09/25/2018 09:33 PM, Dave Chinner wrote:
> >On Tue, Sep 25, 2018 at 08:51:50PM -0400, TongZhang wrote:
> >>Hi,
> >>
> >>I'm bringing up this issue again to let of LSM developers know the
&
On Wed, Sep 26, 2018 at 07:24:26PM +0100, Alan Cox wrote:
> On Wed, 26 Sep 2018 11:33:29 +1000
> Dave Chinner wrote:
>
> > On Tue, Sep 25, 2018 at 08:51:50PM -0400, TongZhang wrote:
> > > Hi,
> > >
> > > I'm bringing up this issue again to let o
internal filesystem interfaces used by trusted code and not general
user-application-facing APIs.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Sat, Sep 22, 2018 at 01:15:42AM +0100, Ben Hutchings wrote:
> 3.16.58-rc1 review patch. If anyone has any objections, please let me know.
>
> --
>
> From: Dave Chinner
>
> commit afca6c5b2595fc44383919fba740c194b0b76aff upstream.
>
> A recent fuz
On Sat, Sep 22, 2018 at 01:15:42AM +0100, Ben Hutchings wrote:
> 3.16.58-rc1 review patch. If anyone has any objections, please let me know.
>
> --
>
> From: Dave Chinner
>
> commit ee457001ed6c6f31ddad69c24c1da8f377d8472d upstream.
>
> We recently
tadata consistency is ensured after a crash.
> Thus, B is either the original B(or not exists) or has been replaced by A.
> The same to D.
>
> Is it possible that, after a crash, D has been replaced by C but B is still
> the original file(or not exists)?
Not for XFS, ext4, btrfs or
On Mon, Sep 10, 2018 at 05:09:52AM -0700, swkhack wrote:
> the i_state init in the critical section,so as the list init should in it.
Why? What bug does this fix?
-Dave.
--
Dave Chinner
da...@fromorbit.com
n that
applications can't do more flush/fsync operations than disk IOs is
not valid, and that performance of open-write-flush-close workloads
on modern filesystems isn't anywhere near as bad as you think it is.
To mangle a common saying into storage speak:
"Caches are for show, IO is for go"
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
ntially stupid idea:
>
> Implement a new class of swap space for backing dirty pages which fail
> to write back. Pages in this space survive reboots, essentially backing
> the implicit commitment POSIX establishes in the face of asynchronous
> writeback errors. Rather than evicting these pages as clean, they are
> swapped out to the persistent swap.
And when that "swap" area gets write errors, too? What then? We're
straight back to the same "what the hell do we do with the error"
problem.
Adding more turtles doesn't help solve this issue.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Mon, Aug 27, 2018 at 11:34:13AM -0400, Waiman Long wrote:
> On 08/26/2018 08:21 PM, Dave Chinner wrote:
> > On Sun, Aug 26, 2018 at 04:53:14PM -0400, Waiman Long wrote:
> >> The current log space reservation code allows multiple wakeups of the
> >> same sleeping waite
On Mon, Aug 27, 2018 at 12:39:06AM -0700, Christoph Hellwig wrote:
> On Mon, Aug 27, 2018 at 10:21:34AM +1000, Dave Chinner wrote:
> > tl; dr: Once you pass a certain point, ramdisks can be *much* slower
> > than SSDs on journal intensive workloads like AIM7. Hence it would be
&g
ion go
away? Can you please test both of these things and report the
results so we can properly evaluate the impact of these changes?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com