There's no point in sharing the internal structure of lock value blocks
with user space.
Signed-off-by: Andreas Gruenbacher
---
fs/gfs2/glock.h | 1 +
fs/gfs2/incore.h | 1 +
fs/gfs2/rgrp.c | 10 ++
include/uapi/linux/gfs2_ondisk.h |
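For context, the lock value block (LVB) here carries cached resource group statistics between cluster nodes. A sketch of the kind of structure being moved out of the uapi header, modeled on struct gfs2_rgrp_lvb (field layout quoted from memory, so treat it as illustrative rather than authoritative):

#include <linux/types.h>

/* Illustrative layout of a resource-group lock value block. All
 * fields are big-endian on the wire, which is why changes to it must
 * follow the same rules as changes to the on-disk structures. */
struct gfs2_rgrp_lvb {
	__be32 rl_magic;	/* identifies a valid LVB */
	__be32 rl_flags;
	__be32 rl_free;		/* cached count of free blocks */
	__be32 rl_dinodes;	/* cached count of inodes */
	__be64 rl_igeneration;
	__be32 rl_unlinked;
	__be32 __pad;
};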
In gfs2_inode_lookup, we initialize inode->i_atime to the lowest
possible value after gfs2_inode_refresh may already have been called.
This should be the other way around, but we didn't notice because
usually the inode type is known from the directory entry and so
gfs2_inode_lookup won't call gfs2_inode_refresh.
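A minimal sketch of the ordering problem being described (simplified control flow, not the actual gfs2 source):

/* Simplified sketch of the ordering bug in gfs2_inode_lookup():
 * the refresh may fill in the real atime from the on-disk dinode,
 * and the later initialization then clobbers it. */
if (type == DT_UNKNOWN) {
	error = gfs2_inode_refresh(ip);	/* may set inode->i_atime from disk */
	if (error)
		goto fail;
}
/* ... */
/* Runs too late: this overwrites whatever the refresh read in.
 * The fix is to initialize atime before gfs2_inode_refresh() can run,
 * so the refresh only ever moves atime forward. */
inode->i_atime.tv_sec = 1LL << (8 * sizeof(inode->i_atime.tv_sec) - 1);
inode->i_atime.tv_nsec = 0;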
Hi,
On 15/01/2020 08:49, Andreas Gruenbacher wrote:
There's no point in sharing the internal structure of lock value blocks
with user space.
The reason that is in ondisk is that changing that structure is
something that needs to follow the same rules as changing the on disk
structures. So
On Wed, Jan 15, 2020 at 9:58 AM Steven Whitehouse wrote:
> On 15/01/2020 08:49, Andreas Gruenbacher wrote:
> > There's no point in sharing the internal structure of lock value blocks
> > with user space.
>
> The reason that is in ondisk is that changing that structure is
> something that needs to
Hi,
On 15/01/2020 09:24, Andreas Gruenbacher wrote:
On Wed, Jan 15, 2020 at 9:58 AM Steven Whitehouse wrote:
On 15/01/2020 08:49, Andreas Gruenbacher wrote:
There's no point in sharing the internal structure of lock value blocks
with user space.
The reason that is in ondisk is that changing
In gfs2_inode_lookup, we initialize inode->i_atime to the lowest
possible value after gfs2_inode_refresh may already have been called.
This should be the other way around, but we didn't notice because
usually the inode type is known from the directory entry and so
gfs2_inode_lookup won't call gfs2_inode_refresh.
Oops, sorry for the duplicate post.
Andreas
On Tue, Jan 14, 2020 at 05:12:13PM +0100, Christoph Hellwig wrote:
> Hi all,
>
> Asynchronous read/write operations currently use a rather magic locking
> scheme, where access to file data is normally protected using a rw_semaphore,
> but if we are doing aio where the syscall returns to userspace
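The pattern under discussion, as a minimal sketch (hypothetical helper names; only the lock lifetime matters here): the submitter takes the inode rwsem, and for queued aio the syscall returns to user space with the lock still held, leaving the completion path to release it.

#include <linux/fs.h>
#include <linux/uio.h>

extern ssize_t demo_submit_dio(struct kiocb *iocb, struct iov_iter *from);

/* Sketch of the "magic" scheme: lock ownership migrates from the
 * submitting task to the I/O completion context. */
static ssize_t demo_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
	struct inode *inode = file_inode(iocb->ki_filp);
	ssize_t ret;

	inode_lock_shared(inode);
	ret = demo_submit_dio(iocb, from);	/* hypothetical submit path */
	if (ret != -EIOCBQUEUED)
		inode_unlock_shared(inode);	/* synchronous case: drop here */
	/* -EIOCBQUEUED: return to user space with the rwsem still held;
	 * the completion handler below releases it, possibly from
	 * another task's context, which is what lockdep objects to. */
	return ret;
}

static void demo_dio_complete(struct kiocb *iocb, long ret)
{
	inode_unlock_shared(file_inode(iocb->ki_filp));
	iocb->ki_complete(iocb, ret, 0);
}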
On Wed, Jan 15, 2020 at 09:24:28AM -0400, Jason Gunthorpe wrote:
> I was interested because you are talking about allowing the read/write side
> of a rw sem to be held across a return to user space/etc, which is the
> same basic problem.
No it is not; allowing the lock to be held across
On Wed, Jan 15, 2020 at 03:33:47PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 15, 2020 at 09:24:28AM -0400, Jason Gunthorpe wrote:
>
> > I was interested because you are talking about allowing the read/write side
> > of a rw sem to be held across a return to user space/etc, which is the
> > same
On Wed, Jan 15, 2020 at 07:56:14AM +0100, Christoph Hellwig wrote:
> On Tue, Jan 14, 2020 at 03:27:00PM -0400, Jason Gunthorpe wrote:
> > I've seen similar locking patterns quite a lot, enough I've thought
> > about having a dedicated locking primitive to do it. It really wants
> > to be a rwsem,
----- Original Message -----
> In gfs2_inode_lookup, we initialize inode->i_atime to the lowest
> possible value after gfs2_inode_refresh may already have been called.
> This should be the other way around, but we didn't notice because
> usually the inode type is known from the directory entry and
On 15/01/2020 13:19, Bob Peterson wrote:
----- Original Message -----
Hi,
On 15/01/2020 09:24, Andreas Gruenbacher wrote:
On Wed, Jan 15, 2020 at 9:58 AM Steven Whitehouse
wrote:
On 15/01/2020 08:49, Andreas Gruenbacher wrote:
There's no point in sharing the internal structure of lock
On Wed, Jan 15, 2020 at 09:24:28AM -0400, Jason Gunthorpe wrote:
> > Your requirement seems a little different, and in fact in many ways
> > similar to the percpu_ref primitive.
>
> I was interested because you are talking about allowing the read/write side
> of a rw sem to be held across a
Don't ignore the return value from generic_write_sync for the direct to
buffered I/O callback case when written is non-zero. Also don't bother
to call generic_write_sync for the pure direct I/O case, as iomap_dio_rw
already takes care of that.
Signed-off-by: Christoph Hellwig
---
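A hedged sketch of the shape such a fix takes in a write path with a direct-to-buffered fallback (function names are approximate stand-ins, not the actual patch):

#include <linux/fs.h>
#include <linux/uio.h>

/* Hypothetical helpers standing in for the filesystem's own
 * direct and buffered write paths. */
extern ssize_t demo_direct_write(struct kiocb *iocb, struct iov_iter *from);
extern ssize_t demo_buffered_write(struct kiocb *iocb, struct iov_iter *from);

static ssize_t demo_sync_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
	ssize_t direct_done, buffered_done, ret;

	/* iomap_dio_rw() already performs the O_SYNC handling for what
	 * it wrote, so the pure direct I/O case needs no sync here. */
	direct_done = demo_direct_write(iocb, from);
	if (direct_done < 0 || !iov_iter_count(from))
		return direct_done;

	buffered_done = demo_buffered_write(iocb, from);
	if (buffered_done <= 0)
		return direct_done ? direct_done : buffered_done;

	/* The buffered fallback did not go through iomap_dio_rw(), so
	 * sync it here, and do not drop the result on the floor. */
	ret = generic_write_sync(iocb, buffered_done);
	if (ret < 0)
		return ret;
	return direct_done + buffered_done;
}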
Hi gfs2 maintainers,
can you take a look at this completely untested series? I found some
O_SYNC handling issues during code inspection for the direct I/O
locking revamp.
On 15/01/2020 08:58, Steven Whitehouse wrote:
Hi,
On 15/01/2020 08:49, Andreas Gruenbacher wrote:
There's no point in sharing the internal structure of lock value blocks
with user space.
The reason that is in ondisk is that changing that structure is
something that needs to follow the same
Only set current->backing_dev_info just around the buffered write calls
to prepare for the next fix.
Signed-off-by: Christoph Hellwig
---
fs/gfs2/file.c | 21 ++---
1 file changed, 10 insertions(+), 11 deletions(-)
diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
index
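The shape of the change, as a small fragment (illustrative, not the actual diff):

/* Sketch: scope current->backing_dev_info tightly around the
 * buffered write, so the writeback code can see which device this
 * task is dirtying only while pages can actually be dirtied, and
 * clear it again immediately afterwards. */
current->backing_dev_info = inode_to_bdi(inode);
ret = iomap_file_buffered_write(iocb, from, &gfs2_iomap_ops);
current->backing_dev_info = NULL;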
On Wed, Jan 15, 2020 at 04:36:14PM +0100, Christoph Hellwig wrote:
> synchronous and currently hack that up, so a version of the percpu_ref
> that actually waits for the other references to go away like we hacked
> up various places seems to exactly suit your requirements.
Ah, yes, sounds like a
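A sketch of that idea (hypothetical wrapper, not an existing kernel API): use a percpu_ref as the cheap reader side, and let the "writer" kill the ref and wait for the outstanding references to drain.

#include <linux/percpu-refcount.h>
#include <linux/completion.h>
#include <linux/gfp.h>

/* Hypothetical wrapper: a percpu_ref used as an rwsem-like gate
 * whose "writer" waits for all readers to go away. */
struct ref_gate {
	struct percpu_ref ref;
	struct completion released;
};

static void ref_gate_release(struct percpu_ref *ref)
{
	struct ref_gate *g = container_of(ref, struct ref_gate, ref);

	complete(&g->released);
}

static int ref_gate_init(struct ref_gate *g)
{
	init_completion(&g->released);
	return percpu_ref_init(&g->ref, ref_gate_release, 0, GFP_KERNEL);
}

/* Reader side: cheap per-cpu increment. Unlike a held rwsem, a
 * plain reference may safely outlive a return to user space. */
static bool ref_gate_enter(struct ref_gate *g)
{
	return percpu_ref_tryget_live(&g->ref);
}

static void ref_gate_exit(struct ref_gate *g)
{
	percpu_ref_put(&g->ref);
}

/* "Writer" side: switch the ref to atomic mode and wait for the
 * outstanding references to drain. */
static void ref_gate_close_and_drain(struct ref_gate *g)
{
	percpu_ref_kill(&g->ref);
	wait_for_completion(&g->released);
}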
On 1/15/20 9:49 AM, Jason Gunthorpe wrote:
> On Wed, Jan 15, 2020 at 03:33:47PM +0100, Peter Zijlstra wrote:
>> On Wed, Jan 15, 2020 at 09:24:28AM -0400, Jason Gunthorpe wrote:
>>
>>> I was interested because you are talking about allowing the read/write side
>>> of a rw sem to be held across a
----- Original Message -----
> Hi,
>
> On 15/01/2020 09:24, Andreas Gruenbacher wrote:
> > On Wed, Jan 15, 2020 at 9:58 AM Steven Whitehouse
> > wrote:
> >> On 15/01/2020 08:49, Andreas Gruenbacher wrote:
> >>> There's no point in sharing the internal structure of lock value blocks
> >>> with