I might be wrong, but if I'm not we're going to have to be very
careful about how guest VMs can access and manipulate host side
resources like the page cache.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Wed, Jan 09, 2019 at 11:08:57AM +0100, Jiri Kosina wrote:
> On Wed, 9 Jan 2019, Dave Chinner wrote:
>
> > FWIW, I just realised that the easiest, most reliable way to invalidate
> > the page cache over a file range is simply to do an O_DIRECT read on it.
>
> Neat,
On Wed, Jan 09, 2019 at 10:25:43AM -0800, Linus Torvalds wrote:
> On Tue, Jan 8, 2019 at 8:39 PM Dave Chinner wrote:
> >
> > FWIW, I just realised that the easiest, most reliable way to
> > invalidate the page cache over a file range is simply to do an
> > O_DIRECT
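The O_DIRECT invalidation trick Dave describes can be sketched from userspace. The sketch below is illustrative only (Linux-specific; the alignment handling via an anonymous mmap buffer is my addition, not from the thread): the kernel writes back and invalidates cached pages over the range before issuing the direct read.

```python
import mmap
import os

def odirect_read(path, length=4096, offset=0):
    """Read a file range with O_DIRECT. As a side effect, the kernel
    invalidates any page cache pages covering the range before doing
    the device I/O - the invalidation trick described above."""
    # O_DIRECT requires the buffer, offset and length to be aligned
    # (typically to the logical block size). An anonymous mmap gives
    # us a page-aligned buffer, which satisfies any sane alignment.
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    try:
        buf = mmap.mmap(-1, length)
        n = os.preadv(fd, [buf], offset)
        return bytes(buf[:n])
    finally:
        os.close(fd)
```

Note that O_DIRECT is not supported on every filesystem (tmpfs, notably), so a real tool would need a fallback path.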
nce we will pretty much always see false positives in the freeze
> path". Hence, just temporarily disable lockdep in that path.
NACK. Turning off lockdep is not a solution, it just prevents
lockdep from finding and reporting real issues.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Wed, Jan 09, 2019 at 03:31:35AM +0100, Jiri Kosina wrote:
> On Wed, 9 Jan 2019, Dave Chinner wrote:
>
> > > But mincore is certainly the easiest interface, and the one that
> > > doesn't require much effort or setup.
> >
> > Off the top of my head, here'
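The mincore(2) probe being discussed can be illustrated from userspace. The ctypes wrapper below is my own illustrative sketch, not code from the thread (Linux-only; it maps the file writable so ctypes can take the mapping's address):

```python
import ctypes
import ctypes.util
import mmap
import os

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

def resident_pages(path):
    """Return (resident, total) page counts for a file mapping via
    mincore(2) - the easy page cache residency probe discussed above."""
    fd = os.open(path, os.O_RDWR)
    try:
        length = os.fstat(fd).st_size
        # Writable mapping so ctypes.from_buffer() can expose its address.
        m = mmap.mmap(fd, length)
        pagesize = mmap.PAGESIZE
        npages = (length + pagesize - 1) // pagesize
        vec = (ctypes.c_ubyte * npages)()
        addr = ctypes.addressof(ctypes.c_char.from_buffer(m))
        if libc.mincore(ctypes.c_void_p(addr), ctypes.c_size_t(length), vec) != 0:
            raise OSError(ctypes.get_errno(), "mincore failed")
        # The low bit of each vector byte is set if the page is resident.
        return sum(v & 1 for v in vec), npages
    finally:
        os.close(fd)
```

This is exactly why mincore is "the easiest interface" for the side channel under discussion: no privileges and almost no setup are required.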
On Tue, Jan 08, 2019 at 09:57:49AM -0800, Linus Torvalds wrote:
> On Mon, Jan 7, 2019 at 8:43 PM Dave Chinner wrote:
> >
> > So, I read the paper and before I was half way through it I figured
> > there are a bunch of other similar page cache invalidation attacks
>
On Tue, Jan 08, 2019 at 11:58:26AM -0500, Waiman Long wrote:
> On 01/07/2019 09:04 PM, Dave Chinner wrote:
> > On Mon, Jan 07, 2019 at 05:41:39PM -0500, Waiman Long wrote:
> >> On 01/07/2019 05:32 PM, Dave Chinner wrote:
> >>> On Mon, Jan 07, 2019 at 10:12:56AM -0500,
likely to break userspace you'd be shouting at them
that "we don't break userspace"
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Mon, Jan 07, 2019 at 05:41:39PM -0500, Waiman Long wrote:
> On 01/07/2019 05:32 PM, Dave Chinner wrote:
> > On Mon, Jan 07, 2019 at 10:12:56AM -0500, Waiman Long wrote:
> >> As newer systems have more and more IRQs and CPUs available in their
> >> system, the perfor
lved at all...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
able kernel in under a week. Essentially, the
"auto-backport" completely short-circuited the upstream QA
process.
IOWs, if you were looking for a case study to demonstrate the
failings of the current stable process, this is it.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Wed, Dec 19, 2018 at 12:35:40PM +0100, Jan Kara wrote:
> On Wed 19-12-18 21:28:25, Dave Chinner wrote:
> > On Tue, Dec 18, 2018 at 08:03:29PM -0700, Jason Gunthorpe wrote:
> > > On Wed, Dec 19, 2018 at 10:42:54AM +1100, Dave Chinner wrote:
> > >
> > > >
On Wed, Dec 19, 2018 at 02:30:05PM -0500, Theodore Y. Ts'o wrote:
> On Wed, Dec 19, 2018 at 01:19:53PM +1100, Dave Chinner wrote:
> > Putting metadata in user files beyond EOF doesn't work with XFS's
> > post-EOF speculative allocation algorithms.
> >
> > i.e. Filesys
On Tue, Dec 18, 2018 at 08:03:29PM -0700, Jason Gunthorpe wrote:
> On Wed, Dec 19, 2018 at 10:42:54AM +1100, Dave Chinner wrote:
>
> > Essentially, what we are talking about is how to handle broken
> > hardware. I say we should just burn it with napalm and thermite
> >
kel tree somewhere else in the filesystem metadata
and providing a separate API to manipulate it avoids this problem.
It allows filesystems to keep their internal metadata and
security-related verification information in a separate channel (and
trust path) that is completely out of user data/access scope.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Tue, Dec 18, 2018 at 11:33:06AM +0100, Jan Kara wrote:
> On Mon 17-12-18 08:58:19, Dave Chinner wrote:
> > On Fri, Dec 14, 2018 at 04:43:21PM +0100, Jan Kara wrote:
> > > Hi!
> > >
> > > On Thu 13-12-18 08:46:41, Dave Chinner wrote:
> > > >
On Mon, Dec 17, 2018 at 10:34:43AM -0800, Matthew Wilcox wrote:
> On Mon, Dec 17, 2018 at 01:11:50PM -0500, Jerome Glisse wrote:
> > On Mon, Dec 17, 2018 at 08:58:19AM +1100, Dave Chinner wrote:
> > > Sure, that's a possibility, but that doesn't close off any race
> &
On Fri, Dec 14, 2018 at 04:43:21PM +0100, Jan Kara wrote:
> Hi!
>
> On Thu 13-12-18 08:46:41, Dave Chinner wrote:
> > On Wed, Dec 12, 2018 at 10:03:20AM -0500, Jerome Glisse wrote:
> > > On Mon, Dec 10, 2018 at 11:28:46AM +0100, Jan Kara wrote:
> > > > On
On Wed, Dec 12, 2018 at 09:02:29PM -0500, Jerome Glisse wrote:
> On Thu, Dec 13, 2018 at 11:51:19AM +1100, Dave Chinner wrote:
> > On Wed, Dec 12, 2018 at 04:59:31PM -0500, Jerome Glisse wrote:
> > > On Thu, Dec 13, 2018 at 08:46:41AM +1100, Dave Chinner wrote:
> > > &
not.
Hence it looks to me like the migration code is making invalid
assumptions about PagePrivate inferring reference counts and so the
migration code needs to be fixed. Requiring filesystems to work
around invalid assumptions in the migration code is a sure recipe
for problems with random file
On Wed, Dec 12, 2018 at 04:59:31PM -0500, Jerome Glisse wrote:
> On Thu, Dec 13, 2018 at 08:46:41AM +1100, Dave Chinner wrote:
> > On Wed, Dec 12, 2018 at 10:03:20AM -0500, Jerome Glisse wrote:
> > > On Mon, Dec 10, 2018 at 11:28:46AM +0100, Jan Kara wrote:
> > > > On
t; function to properly dirty the page without causing filesystem
> freak out.
I'm pretty sure you can't call ->page_mkwrite() from
put_user_page(), so I don't think this is workable at all.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
e RAM copies via the page cache, except
the struct pages point back to the same physical location rather
than having their own temporary, volatile copy of the data.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
o break physical data
sharing and so the page with the file data in it physically changes
during ->page_mkwrite (because DAX). Hence we need to restart the
page fault to map the new page correctly because the file no longer
points at the page that was originally faulted.
With this stashed-page-and-retry mechanism implemented for
->page_mkwrite, we could stash the new page in the vmf and tell the
fault to retry, and everything would just work. Without
->page_mkwrite support, it's just not that interesting and I have
higher priority things to deal with right now
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Sat, Dec 01, 2018 at 02:49:09AM -0500, Sasha Levin wrote:
> On Sat, Dec 01, 2018 at 08:50:05AM +1100, Dave Chinner wrote:
> >On Fri, Nov 30, 2018 at 05:14:41AM -0500, Sasha Levin wrote:
> >>On Fri, Nov 30, 2018 at 09:22:03AM +0100, Greg KH wrote:
> >>>On Fri, No
On Fri, Nov 30, 2018 at 01:00:52PM -0500, Ric Wheeler wrote:
> On 11/30/18 7:55 AM, Dave Chinner wrote:
> >On Thu, Nov 29, 2018 at 06:53:14PM -0500, Ric Wheeler wrote:
> >>Other file systems also need to
> >>accommodate/probe behind the fictitious visible storage devic
On Fri, Nov 30, 2018 at 05:14:41AM -0500, Sasha Levin wrote:
> On Fri, Nov 30, 2018 at 09:22:03AM +0100, Greg KH wrote:
> >On Fri, Nov 30, 2018 at 09:40:19AM +1100, Dave Chinner wrote:
> >>I stopped my tests at 5 billion ops yesterday (i.e. 20 billion ops
> >>agg
On Fri, Nov 30, 2018 at 09:22:03AM +0100, Greg KH wrote:
> On Fri, Nov 30, 2018 at 09:40:19AM +1100, Dave Chinner wrote:
> > On Thu, Nov 29, 2018 at 01:47:56PM +0100, Greg KH wrote:
> > > On Thu, Nov 29, 2018 at 11:14:59PM +1100, Dave Chinner wrote:
> > > >
&
On Thu, Nov 29, 2018 at 01:47:56PM +0100, Greg KH wrote:
> On Thu, Nov 29, 2018 at 11:14:59PM +1100, Dave Chinner wrote:
> >
> > Cherry picking only one of the 50-odd patches we've committed into
> > late 4.19 and 4.20 kernels to fix the problems we've found really
ession test
fixes that, in some cases, took hundreds of millions of fsx ops to
expose.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Thu, Nov 29, 2018 at 12:55:43AM -0500, Sasha Levin wrote:
> From: Dave Chinner
>
> [ Upstream commit b450672fb66b4a991a5b55ee24209ac7ae7690ce ]
>
> If we are doing sub-block dio that extends EOF, we need to zero
> the unused tail of the block to initialise the data in
On Thu, Nov 29, 2018 at 01:00:59AM -0500, Sasha Levin wrote:
> From: Dave Chinner
>
> [ Upstream commit b450672fb66b4a991a5b55ee24209ac7ae7690ce ]
>
> If we are doing sub-block dio that extends EOF, we need to zero
> the unused tail of the block to initialise the data in
other direct
IO paths, too?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
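The tail zeroing described in the quoted commit message is simple block arithmetic. This hypothetical helper (names are mine, not from commit b450672fb66b) computes the unused tail of the final block that must be initialised when a sub-block direct write extends EOF:

```python
def tail_zero_range(write_end, block_size):
    """Given the file offset where a sub-block direct write ends and the
    filesystem block size, return (start, length) of the unused tail of
    the final block. That tail must be zeroed, otherwise stale on-disk
    data in the rest of the block becomes readable past EOF."""
    rem = write_end % block_size
    if rem == 0:
        # Write ended exactly on a block boundary - nothing to zero.
        return (write_end, 0)
    return (write_end, block_size - rem)
```

For example, a 512-byte direct write that extends EOF to offset 4608 on a 4096-byte-block filesystem leaves a 3584-byte tail that must be zeroed.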
ssion, so this value is going to change as the
IO progresses. What does making these partial IOs visible provide,
especially as they then get overwritten by the next submissions?
Indeed, how does one wait on all IOs in the DIO to complete if we
are only tracking one of many?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Thu, Nov 15, 2018 at 02:24:19PM -0800, Darrick J. Wong wrote:
> On Fri, Nov 16, 2018 at 09:13:37AM +1100, Dave Chinner wrote:
> > On Thu, Nov 15, 2018 at 11:10:36AM +0800, Ming Lei wrote:
> > > On Thu, Nov 15, 2018 at 12:22:01PM +1100, Dave Chinner wrote:
> > > >
On Thu, Nov 15, 2018 at 11:10:36AM +0800, Ming Lei wrote:
> On Thu, Nov 15, 2018 at 12:22:01PM +1100, Dave Chinner wrote:
> > On Thu, Nov 15, 2018 at 09:06:52AM +0800, Ming Lei wrote:
> > > On Wed, Nov 14, 2018 at 08:18:24AM -0700, Jens Axboe wrote:
> > > > On 11/13/
On Thu, Nov 15, 2018 at 09:06:52AM +0800, Ming Lei wrote:
> On Wed, Nov 14, 2018 at 08:18:24AM -0700, Jens Axboe wrote:
> > On 11/13/18 2:43 PM, Dave Chinner wrote:
> > > From: Dave Chinner
> > >
> > > A discard cleanup merged into 4.20-rc2 causes fstests x
On Wed, Nov 14, 2018 at 10:53:11AM +0800, Ming Lei wrote:
> On Wed, Nov 14, 2018 at 5:44 AM Dave Chinner wrote:
> >
> > From: Dave Chinner
> >
> > A discard cleanup merged into 4.20-rc2 causes fstests xfs/259 to
> > fall into an endless loop in the discard code.
From: Dave Chinner
A discard cleanup merged into 4.20-rc2 causes fstests xfs/259 to
fall into an endless loop in the discard code. The test is creating
a device that is exactly 2^32 sectors in size to test mkfs boundary
conditions around the 32 bit sector overflow region.
mkfs issues a discard
On Mon, Nov 12, 2018 at 08:23:42PM -0800, Joe Perches wrote:
> On Tue, 2018-11-13 at 14:09 +1100, Dave Chinner wrote:
> > On Mon, Nov 12, 2018 at 08:54:10PM -0500, Theodore Y. Ts'o wrote:
> > > On Tue, Nov 13, 2018 at 12:18:05PM +1100, Dave Chinner wrote:
> > > > I'm
On Mon, Nov 12, 2018 at 08:54:10PM -0500, Theodore Y. Ts'o wrote:
> On Tue, Nov 13, 2018 at 12:18:05PM +1100, Dave Chinner wrote:
> > I'm not interested in making code fast if distro support engineers
> > can't debug problems on user systems easily. Optimising for
>
On Mon, Nov 12, 2018 at 02:30:01PM -0800, Joe Perches wrote:
> On Tue, 2018-11-13 at 08:45 +1100, Dave Chinner wrote:
> > On Mon, Nov 12, 2018 at 02:12:08PM -0600, Eric Sandeen wrote:
> > > On 11/10/18 7:21 PM, Joe Perches wrote:
> > > > Reduce total object size quit
traces. It flattens them way too much to
be able to tell how we got to a specific location in the code.
In reality, being able to find problems quickly and efficiently is
far more important to us than being able to run everything at
ludicrous speed
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
columns is preferred" but they are wrong.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
sn't solve the problem.
The problem is that this specific implementation of per-cpu
counters needs to be summed on every read. Hence, when you have a
huge number of CPUs, each read iterates over every per-CPU slot and
takes a substantial amount of time.
If only we had percpu counters that had a fixed, extremely low read
overhead that doesn't care about the number of CPUs in the
machine
Oh, wait, we do: percpu_counters.[ch].
This all seems like a counter implementation deficiency to me, not
an interface problem...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
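The percpu_counter behaviour Dave points at (a fixed, extremely cheap approximate read, with error bounded by the batch size times the number of CPUs) can be modelled in a few lines. This toy Python model is illustrative only; it mirrors the percpu_counter_add()/percpu_counter_read() split, with threads standing in for CPUs:

```python
import threading

class PercpuCounter:
    """Toy model of the kernel's percpu_counter: each "CPU" (thread
    here) accumulates into a private delta and folds it into the shared
    count only when the delta exceeds a batch threshold, so an
    approximate read is a single load regardless of CPU count."""

    def __init__(self, batch=32):
        self.batch = batch
        self.count = 0                    # shared, approximate total
        self.lock = threading.Lock()
        self.local = threading.local()    # per-"CPU" unfolded delta

    def add(self, amount):
        delta = getattr(self.local, "delta", 0) + amount
        if abs(delta) >= self.batch:
            with self.lock:               # fold into the shared count
                self.count += delta
            delta = 0
        self.local.delta = delta

    def read(self):
        # percpu_counter_read() analogue: one load, O(1) in CPU count,
        # off by at most batch per CPU.
        return self.count
```

The exact sum (the percpu_counter_sum() analogue) would still have to walk every per-CPU delta, but callers that tolerate bounded error never pay that cost.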
On Tue, Nov 06, 2018 at 12:00:06PM +0100, Jan Kara wrote:
> On Tue 06-11-18 13:47:15, Dave Chinner wrote:
> > On Mon, Nov 05, 2018 at 04:26:04PM -0800, John Hubbard wrote:
> > > On 11/5/18 1:54 AM, Jan Kara wrote:
> > > > Hmm, have you tried larger buffer sizes? B
'd argue that the IO latency impact is far worse than a 20%
throughput drop.
i.e. You can make up for throughput drops by running a deeper
queue/more dispatch threads, but you can't reduce IO latency at
all...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Sat, Nov 03, 2018 at 10:13:37AM -0700, Linus Torvalds wrote:
> On Fri, Nov 2, 2018 at 4:36 PM Dave Chinner wrote:
> >
> > On Fri, Nov 02, 2018 at 09:35:23AM -0700, Linus Torvalds wrote:
> > >
> > > I don't love the timing of this at the end of the merge window
On Fri, Nov 02, 2018 at 09:35:23AM -0700, Linus Torvalds wrote:
> On Thu, Nov 1, 2018 at 10:15 PM Dave Chinner wrote:
> >
> > Can you please pull update containing a rework of the VFS clone and
> > dedupe file range infrastructure from the tag listed below?
>
&
k.h | 15 +-
include/linux/fs.h| 55 --
mm/filemap.c | 146 +++---
20 files changed, 734 insertions(+), 596 deletions(-)
--
Dave Chinner
da...@fromorbit.com
On Wed, Oct 31, 2018 at 05:59:17AM +, y-g...@fujitsu.com wrote:
> > On Mon, Oct 29, 2018 at 11:30:41PM -0700, Dan Williams wrote:
> > > On Thu, Oct 18, 2018 at 5:58 PM Dave Chinner wrote:
> > In summary:
> >
> > MAP_DIRECT is an access hint.
>
tream maintainer when your tree
> is submitted for merging. You may also want to consider cooperating
> with the maintainer of the conflicting tree to minimise any particularly
> complex conflicts.
Looks ok. I didn't expect this conflict, but looks simple enough
to resolve. Thanks!
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
->remap_file_range(). See Documentation/filesystems/vfs.txt for more
> + information.
Looks good - I knew about this one from merging back into a recent
Linus kernel.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
't validate the input
properly".
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
ut the cycle is not valid.
And that's the problem. Neither the head nor the tail blocks are
validated before they are used. CRC checking of the head and tail
blocks comes later
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
in xrep_findroot_block
Dave Chinner (2):
xfs: issue log message on user force shutdown
xfs: fix use-after-free race in xfs_buf_rele
fs/xfs/libxfs/xfs_attr.c | 236 -
fs/xfs/{ => libxfs}/xfs_attr.h | 2 +
fs/xfs/libxfs/xfs_bmap.c |
loc() directly, seems not necessary to
> introduce this change in block layer any more given 512-aligned buffer
> should be fine everywhere.
>
> The only benefit to make it as block helper is that the offset or size
> can be checked with q->dma_alignment.
>
> Dave/Jens, do you think which way is better? Put allocation as block
> helper or fs uses page_frag_alloc() directly for allocating 512*N-byte
> buffer(total size is less than PAGE_SIZE)?
Christoph has already said he's looking at using page_frag_alloc()
directly in XFS
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
age cache resources are not being
> consumed, and that the kernel is handling metadata synchronization for
> any write-faults.
Yes, we need to do that, but not at the cost of having the API
prevent apps from ever being able to use direct access + msync/fsync
data integrity operations.
C
On Thu, Oct 18, 2018 at 04:55:55PM +0200, Jan Kara wrote:
> On Thu 18-10-18 11:25:10, Dave Chinner wrote:
> > On Wed, Oct 17, 2018 at 04:23:50PM -0400, Jeff Moyer wrote:
> > > MAP_SYNC
> > > - file system guarantees that metadata required to reach faulted-in file
and this is what I think you were proposing, Jan:
>
> madvise flag, MADV_DIRECT_ACCESS
> - same semantics as MAP_DIRECT, but specified via the madvise system call
Seems to be the equivalent of fcntl(F_SETFL, O_DIRECT). Makes sense
to have both MAP_DIRECT and MADV_DIRECT_ACCESS to me - one is an
init time flag, the other is a run time flag.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
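Dave's fcntl(F_SETFL, O_DIRECT) analogy can be demonstrated directly: toggling O_DIRECT on an already-open descriptor is the run-time counterpart of opening with O_DIRECT, just as MADV_DIRECT_ACCESS would be the run-time counterpart of MAP_DIRECT. A small Linux-only sketch (the helper name is mine):

```python
import fcntl
import os

def set_direct(fd, enable=True):
    """Toggle O_DIRECT on an open file descriptor at run time - the
    fcntl(F_SETFL) analogue of the proposed MADV_DIRECT_ACCESS flag.
    Returns whether O_DIRECT is set afterwards. os.O_DIRECT is
    Linux-specific."""
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    if enable:
        flags |= os.O_DIRECT
    else:
        flags &= ~os.O_DIRECT
    fcntl.fcntl(fd, fcntl.F_SETFL, flags)
    return bool(fcntl.fcntl(fd, fcntl.F_GETFL) & os.O_DIRECT)
```

Filesystems that don't support direct I/O reject the F_SETFL with EINVAL, so callers need an error path just as they do for open(O_DIRECT).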
___
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm
On Sat, Oct 13, 2018 at 12:34:12AM -0700, John Hubbard wrote:
> On 10/12/18 8:55 PM, Dave Chinner wrote:
> > On Thu, Oct 11, 2018 at 11:00:12PM -0700, john.hubb...@gmail.com wrote:
> >> From: John Hubbard
> [...]
> >> diff --git a/include/linux/mm_types.h b/inclu
read/write direct IO and so the pages
passed to gup will be on the active/inactive LRUs. Hence I can't see
how you can have dual use of the LRU list head like this
What am I missing here?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Thu, Oct 11, 2018 at 11:27:35AM -0600, Jonathan Corbet wrote:
> On Sat, 6 Oct 2018 10:51:54 +1000 Dave Chinner
> wrote:
>
> > Can you let us know whether the CC-by-SA 4.0 license is
> > acceptable or not? That's really the only thing that we need
> > clarified at t
xfs: zero posteof blocks when cloning above eof
xfs: update ctime and remove suid before cloning files
Dave Chinner (2):
xfs: fix data corruption w/ unaligned dedupe ranges
xfs: fix data corruption w/ unaligned reflink ranges
fs/xfs/xfs_reflink.c | 200
On Fri, Oct 05, 2018 at 07:01:20PM -0600, Jonathan Corbet wrote:
> On Sat, 6 Oct 2018 10:51:54 +1000
> Dave Chinner wrote:
>
> > Can you let us know whether the CC-by-SA 4.0 license is acceptable
> > or not? That's really the only thing that we need clarified at this
> &
ow whether the CC-by-SA 4.0 license is acceptable
or not? That's really the only thing that we need clarified at this
point - if it's OK I'll pull this into the XFS tree for the 4.20
merge window. If not, we'll go back to the drawing board
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
gh-dpi monitor
made it almost impossible to read even though I have no eyesight
problems
Acked-by: Dave Chinner
-Dave.
--
Dave Chinner
da...@fromorbit.com
On Fri, Oct 05, 2018 at 12:46:40PM -0700, Andrew Morton wrote:
> On Fri, 5 Oct 2018 15:45:26 +1000 Dave Chinner wrote:
>
> > From: Dave Chinner
> >
> > We've recently seen a workload on XFS filesystems with a repeatable
> > deadlock between background
From: Dave Chinner
We've recently seen a workload on XFS filesystems with a repeatable
deadlock between background writeback and a multi-process
application doing concurrent writes and fsyncs to a small range of a
file.
range_cyclic
writeback Process 1 Process 2