On Thu, Mar 30, 2017 at 02:35:32PM -0400, Jeff Layton wrote:
> On Thu, 2017-03-30 at 12:12 -0400, J. Bruce Fields wrote:
> > On Thu, Mar 30, 2017 at 07:11:48AM -0400, Jeff Layton wrote:
> > > On Thu, 2017-03-30 at 08:47 +0200, Jan Kara wrote:
> > > > Because if above is acceptable we could make
On Thu, Mar 30, 2017 at 10:41:37AM +1100, Dave Chinner wrote:
> On Wed, Mar 29, 2017 at 01:54:31PM -0400, Jeff Layton wrote:
> > On Wed, 2017-03-29 at 13:15 +0200, Jan Kara wrote:
> > > On Tue 21-03-17 14:46:53, Jeff Layton wrote:
> > > > On Tue, 2017-03-21 at 14:30 -0400, J. Bruce Fields wrote:
>
On Mon, Apr 03, 2017 at 11:01:39AM +0800, Qu Wenruo wrote:
> Btrfs allows inline file extent if and only if
> 1) It's at offset 0
> 2) It's smaller than min(max_inline, page_size)
> Although we don't specify if the size is before compression or after
> compression.
> At least according to
On Mon, Apr 03, 2017 at 03:09:23PM +0800, Qu Wenruo wrote:
> As long as we don't modify the on-disk data, the fiemap result should always
> be constant.
>
> Operations like a cycle mount or a sleep should not affect the fiemap result.
> Unfortunately, btrfs doesn't follow that behavior.
>
> Btrfs
On Tue, Apr 04 2017, J. Bruce Fields wrote:
> On Thu, Mar 30, 2017 at 02:35:32PM -0400, Jeff Layton wrote:
>> On Thu, 2017-03-30 at 12:12 -0400, J. Bruce Fields wrote:
>> > On Thu, Mar 30, 2017 at 07:11:48AM -0400, Jeff Layton wrote:
>> > > On Thu, 2017-03-30 at 08:47 +0200, Jan Kara wrote:
>> >
Btrfs allows inline file extent if and only if
1) It's at offset 0
2) It's smaller than min(max_inline, page_size)
Although we don't specify whether the size is measured before or after
compression, at least according to current behavior we are only limiting
the size after compression.
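The rule described above can be sketched as a small userspace model. The constants and the helper name here are purely illustrative (the real check lives in btrfs's write path, and the default max_inline value depends on the kernel version and mount options):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-ins for the mount option and the page size. */
#define PAGE_SIZE  4096u
#define MAX_INLINE 2048u

/* Sketch of the rule quoted above: an inline file extent is allowed
 * only at offset 0, and only when the (post-compression) data size is
 * smaller than min(max_inline, page_size). */
static bool can_inline_extent(uint64_t offset, uint64_t size)
{
    uint64_t limit = MAX_INLINE < PAGE_SIZE ? MAX_INLINE : PAGE_SIZE;

    return offset == 0 && size < limit;
}
```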
At 04/05/2017 09:27 AM, Eryu Guan wrote:
On Mon, Apr 03, 2017 at 11:01:39AM +0800, Qu Wenruo wrote:
Btrfs allows inline file extent if and only if
1) It's at offset 0
2) It's smaller than min(max_inline, page_size)
Although we don't specify if the size is before compression or after
On 04/04/2017 03:41 AM, Christoph Hellwig wrote:
> On Tue, Apr 04, 2017 at 09:58:53AM +0200, Jan Kara wrote:
>> FS_NOWAIT looks a bit too generic given these are filesystem feature flags.
>> Can we call it FS_NOWAIT_IO?
>
> It's way too generic, as it's a feature of the particular file_operations
At 04/04/2017 12:31 AM, Darrick J. Wong wrote:
On Mon, Apr 03, 2017 at 03:09:23PM +0800, Qu Wenruo wrote:
As long as we don't modify the on-disk data, the fiemap result should always
be constant.
Operations like a cycle mount or a sleep should not affect the fiemap result.
Unfortunately, btrfs
On Tue, Apr 04 2017, Dave Chinner wrote:
> On Mon, Apr 03, 2017 at 04:00:55PM +0200, Jan Kara wrote:
>> On Sun 02-04-17 09:05:26, Dave Chinner wrote:
>> > On Thu, Mar 30, 2017 at 12:12:31PM -0400, J. Bruce Fields wrote:
>> > > On Thu, Mar 30, 2017 at 07:11:48AM -0400, Jeff Layton wrote:
>> > > >
The objective of this patch is to clean up barrier_all_devices()
so that the error checking is in a separate loop, independent of
the loop which submits and waits on the device flush requests.
Doing this helps further patches which tune the error actions
as needed.
blkdev_issue_flush() checks whether the write cache is enabled
before submitting the flush. This patch adds a check to fail fast
if it's not.
Signed-off-by: Anand Jain
---
v1:
This patch will replace
[PATCH] btrfs: delete unused member nobarriers
v2:
- This patch
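The two-loop restructuring described in the changelog above could be sketched, with purely illustrative names and a toy flush function, as:

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model of the restructuring: one loop submits and waits on the
 * per-device flush requests, and a separate, independent loop checks
 * the results, so error handling can be tuned later without touching
 * the submission path. All names are illustrative. */
struct dev {
    bool flush_ok;
};

static void flush_one(struct dev *d)
{
    d->flush_ok = true; /* stand-in for submit + wait on a flush bio */
}

static int barrier_all_devices(struct dev *devs, size_t n)
{
    size_t i;
    int errors = 0;

    /* loop 1: submit and wait on the device flush requests */
    for (i = 0; i < n; i++)
        flush_one(&devs[i]);

    /* loop 2: independent error checking */
    for (i = 0; i < n; i++)
        if (!devs[i].flush_ok)
            errors++;

    return errors ? -1 : 0;
}
```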
At 04/05/2017 10:35 AM, Eryu Guan wrote:
On Mon, Apr 03, 2017 at 03:09:23PM +0800, Qu Wenruo wrote:
As long as we don't modify the on-disk data, the fiemap result should always
be constant.
Operations like a cycle mount or a sleep should not affect the fiemap result.
Unfortunately, btrfs doesn't
The last consumer of nobarriers was removed by commit [1], and sync
won't fail with EOPNOTSUPP anymore. Thus, even when the write cache is
write-through, it just returns success without actually sending such
a request to the block device/LUN.
[1]
commit
-Bo/Btrfs-cleanup-submit_one_bio/20170404-194545
config: x86_64-randconfig-x012-201714 (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
# save the attached .config to linux build tree
make ARCH=x86_64
All errors (new ones prefixed by >>):
fs
On Mon, Apr 03, 2017 at 04:00:55PM +0200, Jan Kara wrote:
> On Sun 02-04-17 09:05:26, Dave Chinner wrote:
> > On Thu, Mar 30, 2017 at 12:12:31PM -0400, J. Bruce Fields wrote:
> > > On Thu, Mar 30, 2017 at 07:11:48AM -0400, Jeff Layton wrote:
> > > > On Thu, 2017-03-30 at 08:47 +0200, Jan Kara
On 04/04/2017 12:02 AM, Robert Krig wrote:
> My storage array is BTRFS Raid1 with 4x8TB Drives.
> Wouldn't it be possible to simply disconnect two of those drives, mount
> with -o degraded and still have access (even if read-only) to all my data?
Just jumping on this point: my understanding of
On Tue, Apr 04, 2017 at 09:29:11AM -0400, Brian B wrote:
> On 04/04/2017 12:02 AM, Robert Krig wrote:
> > My storage array is BTRFS Raid1 with 4x8TB Drives.
> > Wouldn't it be possible to simply disconnect two of those drives, mount
> > with -o degraded and still have access (even if read-only) to
On 2017-04-04 09:29, Brian B wrote:
On 04/04/2017 12:02 AM, Robert Krig wrote:
My storage array is BTRFS Raid1 with 4x8TB Drives.
Wouldn't it be possible to simply disconnect two of those drives, mount
with -o degraded and still have access (even if read-only) to all my data?
Just jumping on
From: Filipe Manana
If the call to btrfs_qgroup_reserve_data() failed, we were leaking an
extent map structure. The failure can happen either due to an -ENOMEM
condition or, when quotas are enabled, due to -EDQUOT for example.
Signed-off-by: Filipe Manana
From: Filipe Manana
Currently when there are buffered writes that were not yet flushed and
they fall within allocated ranges of the file (that is, not in holes or
beyond eof assuming there are no prealloc extents beyond eof), btrfs
simply reports an incorrect number of used
From: Filipe Manana
Test that a filesystem's implementation of the stat(2) system call
reports correct values for the number of blocks allocated for a file
when there are delayed allocations.
This test is motivated by a bug in btrfs which is fixed by the following
patch for
This patch adds FS_IOC_FSSETXATTR/FS_IOC_FSGETXATTR ioctl interface support
for btrfs. Extended file attributes are 32 bit values (FS_XFLAG_SYNC,
FS_XFLAG_IMMUTABLE, etc.) which have a one-to-one mapping to the flag values
that can be stored in inode->i_flags (i.e. S_SYNC, S_IMMUTABLE, etc).
The
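The one-to-one mapping described above can be sketched as a small translation helper. The flag values below are illustrative only (the real FS_XFLAG_* and S_* constants come from the kernel/uapi headers), and only two of the flags are shown:

```c
/* Illustrative flag values; the real constants live in the kernel headers. */
#define FS_XFLAG_SYNC      0x00000020u
#define FS_XFLAG_IMMUTABLE 0x00000008u
#define S_SYNC             (1u << 0)
#define S_IMMUTABLE        (1u << 3)

/* Sketch of the FS_IOC_FSSETXATTR direction of the mapping: translate
 * the 32 bit xflags into the corresponding inode->i_flags bits. */
static unsigned int xflags_to_iflags(unsigned int xflags)
{
    unsigned int iflags = 0;

    if (xflags & FS_XFLAG_SYNC)
        iflags |= S_SYNC;
    if (xflags & FS_XFLAG_IMMUTABLE)
        iflags |= S_IMMUTABLE;
    return iflags;
}
```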
On Tue, Mar 21, 2017 at 6:50 AM, Qu Wenruo wrote:
> Hi Filipe,
>
> At 02/15/2017 04:35 AM, fdman...@kernel.org wrote:
>>
>> From: Filipe Manana
>>
>> Test that both a full and incremental btrfs send operation preserves file
>> holes.
>
>
> I found the
On Mon, Apr 3, 2017 at 10:02 PM, Robert Krig
wrote:
>
>
> On 03.04.2017 16:25, Robert Krig wrote:
>>
>> I'm gonna run an extensive memory check once I get home, since you
>> mentioned corrupt memory might be an issue here.
>> --
>> To unsubscribe from this list:
> [ ... ] I tried to use eSATA and ext4 first, but observed
> silent data corruption and irrecoverable kernel hangs --
> apparently, SATA is not really designed for external use.
SATA works for external use, eSATA works well, but what really
matters is the chipset of the adapter card.
In my
On Mon, Apr 03, 2017 at 01:45:47PM -0700, Liu Bo wrote:
> @bio_offset is passed into submit_bio_hook and is used at
> btrfs_wq_submit_bio(), but only dio code makes use of @bio_offset, so
> remove other dead code.
>
Please ignore this one.
Thanks,
-liubo
> Cc: David Sterba
>
On Tue, Apr 04, 2017 at 10:34:14PM +1000, Dave Chinner wrote:
> On Mon, Apr 03, 2017 at 04:00:55PM +0200, Jan Kara wrote:
> > What filesystems can or cannot easily do obviously differs. Ext4 has a
> > recovery flag set in superblock on RW mount/remount and cleared on
> > umount/RO remount.
>
>
Please make this a REQ_* flag so that it can be passed in the bio,
the request and as an argument to the get_request functions instead
of testing for a bio.
> +	if (unaligned_io) {
> +		/* If we are going to wait for other DIO to finish, bail */
> +		if ((iocb->ki_flags & IOCB_NOWAIT) &&
> +		    atomic_read(&inode->i_dio_count))
> +			return -EAGAIN;
> 		inode_dio_wait(inode);
This checks
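The logic of that hunk can be modeled in a few lines of userspace C. The flag and errno values are illustrative stand-ins for the kernel's definitions:

```c
#include <stdbool.h>

#define EAGAIN      11    /* illustrative errno value */
#define IOCB_NOWAIT 0x80  /* illustrative flag bit */

/* Models the hunk above: unaligned direct I/O must serialize against
 * in-flight DIO, but a nowait request is not allowed to block, so it
 * bails with -EAGAIN instead of calling inode_dio_wait(). */
static int unaligned_dio_check(int ki_flags, int i_dio_count, bool unaligned_io)
{
    if (unaligned_io && (ki_flags & IOCB_NOWAIT) && i_dio_count > 0)
        return -EAGAIN;
    return 0; /* safe to proceed (possibly after inode_dio_wait()) */
}
```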
On Mon, Apr 03, 2017 at 01:53:00PM -0500, Goldwyn Rodrigues wrote:
> From: Goldwyn Rodrigues
>
> This flag informs the kernel to bail out if an AIO request will block
> for reasons such as file allocations, or a writeback being triggered,
> or would block while allocating requests while
On Tue, Apr 04, 2017 at 09:58:53AM +0200, Jan Kara wrote:
> FS_NOWAIT looks a bit too generic given these are filesystem feature flags.
> Can we call it FS_NOWAIT_IO?
It's way too generic, as it's a feature of the particular file_operations
instance. But once we switch to using RWF_* we can just
On Mon 03-04-17 13:53:05, Goldwyn Rodrigues wrote:
> From: Goldwyn Rodrigues
>
> Return EAGAIN if any of the following checks fail for direct I/O:
> + i_rwsem is lockable
> + Writing beyond end of file (will trigger allocation)
> + Blocks are not allocated at the write
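The checks listed in that patch description can be condensed into a small decision function. The boolean inputs and the errno value are illustrative; in the kernel these correspond to an i_rwsem trylock, an EOF comparison, and a block-mapping lookup:

```c
#include <stdbool.h>

#define EAGAIN 11 /* illustrative errno value */

/* Condensed model of the checks above: a nowait direct write returns
 * -EAGAIN whenever it could not proceed without blocking. */
static int nowait_dio_write_check(bool rwsem_trylock_ok,
                                  bool extends_file,
                                  bool blocks_allocated)
{
    if (!rwsem_trylock_ok)  /* would sleep waiting on i_rwsem */
        return -EAGAIN;
    if (extends_file)       /* writing beyond EOF triggers allocation */
        return -EAGAIN;
    if (!blocks_allocated)  /* unallocated blocks need allocation */
        return -EAGAIN;
    return 0;
}
```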
On 04/03/2017 08:06 PM, David Sterba wrote:
Please update the changelog to say why it's OK to remove it, e.g. the
commit that removed the last user.
commit b25de9d6da49b1a8760a89672283128aa8c78345
Author: Christoph Hellwig
Date: Fri Apr 24 21:41:01 2015 +0200
block: remove
We have already assigned q from bdev_get_queue() so use it.
Also rearrange the code for better readability.
Signed-off-by: Anand Jain
---
fs/btrfs/volumes.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index
blkdev_issue_flush(), or an empty bio with the REQ_PREFLUSH flag,
will never return BIO_EOPNOTSUPP as of now, though it may in the
future. So for now, so that btrfs is least affected by this change
in the block layer, we can check whether the device is flush capable.
In this process,
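The "check if the device is flush capable" idea can be sketched as follows. This is a toy model, not the btrfs code: the boolean input stands in for the block device's write-cache state, and the counter models whether a flush request would actually be submitted:

```c
#include <stdbool.h>

/* Illustrative model: a flush to a device without a volatile write
 * cache is a silent no-op that still reports success, so a caller that
 * needs to know can check the cache state up front rather than relying
 * on an EOPNOTSUPP error that the block layer no longer returns. */
static int issue_flush(bool write_cache_enabled, int *flushes_sent)
{
    if (!write_cache_enabled)
        return 0;        /* nothing to flush; succeed without I/O */
    (*flushes_sent)++;   /* stand-in for submitting a real flush */
    return 0;
}
```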
As long as we don't modify the on-disk data, the fiemap result should always
be constant.
Operations like a cycle mount or a sleep should not affect the fiemap result.
Unfortunately, btrfs doesn't follow that behavior.
Btrfs fiemap sometimes returns a merged result, while after a cycle mount it
returns