On Mon 11-12-17 16:55:30, Josef Bacik wrote:
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 356a814e7c8e..48de090f5a07 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -179,9 +179,19 @@ enum node_stat_item {
> NR_VMSCAN_IMMEDIATE, /*
When multiple pending snapshots referring to the same source subvolume are
executed, enabled quota will cause root item corruption, where root
items use the old bytenr (no backref in the extent tree).
This can be triggered by fstests btrfs/152.
The cause is that when the source subvolume is still dirty, extra
On Mon 11-12-17 16:55:28, Josef Bacik wrote:
> From: Josef Bacik
>
> This helper allows us to add an arbitrary amount to the fprop
> structures.
>
> Signed-off-by: Josef Bacik
Looks good. You can add:
Reviewed-by: Jan Kara
On Mon, 18 Dec 2017 16:09:30 +0100
Daniel Borkmann wrote:
> On 12/18/2017 10:51 AM, Masami Hiramatsu wrote:
> > On Fri, 15 Dec 2017 14:12:54 -0500
> > Josef Bacik wrote:
> >> From: Josef Bacik
> >>
> >> Error injection is sloppy and
On Fri, 15 Dec 2017 14:12:52 -0500
Josef Bacik wrote:
> From: Josef Bacik
>
> Using BPF we can override kprobed functions and return arbitrary
> values. Obviously this can be a bit unsafe, so make this feature opt-in
> for functions. Simply tag a
On 19.12.2017 08:05, Misono, Tomohiro wrote:
> Hello,
>
> On 2017/12/18 19:06, Nikolay Borisov wrote:
>>
>>
>> On 18.12.2017 12:03, Nikolay Borisov wrote:
>>> Currently if a mounted-btrfs instance is mounted for the 2nd time
>>> without first unmounting the first instance then we hit a memory
Hello,
On 2017/12/18 19:06, Nikolay Borisov wrote:
>
>
> On 18.12.2017 12:03, Nikolay Borisov wrote:
>> Currently if a mounted-btrfs instance is mounted for the 2nd time
>> without first unmounting the first instance then we hit a memory leak
>> in btrfs_mount_root due to the fs_info of the
Fixes: 61b4603b8338 ("buffer_head: separate out create_bh_bio() from submit_bh_wbc()")
Signed-off-by: Fengguang Wu
---
fs/buffer.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index eb15599..b793f6d 100644
---
tree: https://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git cgroup-btrfs
head: 2e032c72c43e8008be45f23376d9e24d75c3d85f
commit: 61b4603b83389326984a47e702a319a40d006f77 [3/5] buffer_head: separate out create_bh_bio() from submit_bh_wbc()
reproduce:
# apt-get install
On Mon, Dec 18, 2017 at 8:26 PM, Chris Murphy wrote:
> On Mon, Dec 18, 2017 at 7:19 PM, Su Yue wrote:
>>>
>>
>> Did you apply all patches on this branch?
>
> Yes, I think so:
1032 git remote add qu https://github.com/adam900710/btrfs-progs
On Mon, Dec 18, 2017 at 7:19 PM, Su Yue wrote:
>
>
> On 12/19/2017 09:34 AM, Chris Murphy wrote:
>>
>> On Sun, Dec 17, 2017 at 8:07 PM, Su Yue
>> wrote:
>>>
>>> Hi,
>>>
>>> On 12/18/2017 10:50 AM, Chris Murphy wrote:
This is v4.14.
On 12/19/2017 09:34 AM, Chris Murphy wrote:
On Sun, Dec 17, 2017 at 8:07 PM, Su Yue
wrote:
Hi,
On 12/18/2017 10:50 AM, Chris Murphy wrote:
This is v4.14.
I've filed a bug which contains the build steps, versions. It's
crashing on all volumes I try it on so far.
On Sun, Dec 17, 2017 at 8:07 PM, Su Yue wrote:
> Hi,
>
> On 12/18/2017 10:50 AM, Chris Murphy wrote:
>>
>> This is v4.14.
>>
>> I've filed a bug which contains the build steps, versions. It's
>> crashing on all volumes I try it on so far.
>>
>>
>
> It was fixed by Qu's
On 03.12.2017 16:39 Martin Raiber wrote:
> Am 26.11.2017 um 17:02 schrieb Tomasz Chmielewski:
>> On 2017-11-27 00:37, Martin Raiber wrote:
>>> On 26.11.2017 08:46 Tomasz Chmielewski wrote:
Got this one on a 4.14-rc7 filesystem with some 400 GB left:
>>> I guess it is too late now, but I guess
On 12/15/2017 09:10 AM, Matthew Wilcox wrote:
> On Mon, Dec 11, 2017 at 03:10:22PM -0800, Randy Dunlap wrote:
>>> +The XArray does not support storing :c:func:`IS_ERR` pointers; some
>>> +conflict with data values and others conflict with entries the XArray
>>> +uses for its own purposes. If you
On 12/15/2017 04:34 AM, Matthew Wilcox wrote:
> On Thu, Dec 14, 2017 at 08:22:14PM -0800, Matthew Wilcox wrote:
>> On Mon, Dec 11, 2017 at 03:10:22PM -0800, Randy Dunlap wrote:
+A freshly-initialised XArray contains a ``NULL`` pointer at every index.
+Each non-``NULL`` entry in the array
On Mon, Dec 18, 2017 at 3:28 PM, Chris Murphy wrote:
> On Mon, Dec 18, 2017 at 1:49 AM, Anand Jain wrote:
>
>> Agreed. IMO degraded-raid1-single-chunk is an accidental feature
>> caused by [1], which we should revert back, since..
>>- balance
On Mon, Dec 18, 2017 at 1:49 AM, Anand Jain wrote:
> Agreed. IMO degraded-raid1-single-chunk is an accidental feature
> caused by [1], which we should revert back, since..
>- balance (to raid1 chunk) may fail if FS is near full
>- recovery (to raid1 chunk) will
On Mon, Dec 18, 2017 at 02:35:20PM -0500, Jeff Layton wrote:
> [PATCH] SQUASH: add memory barriers around i_version accesses
Why explicit memory barriers rather than annotating the operations
with the required semantics and getting the barriers the arch
requires automatically? I suspect this
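The alternative being suggested here is to annotate the accesses themselves rather than scatter explicit barriers. A minimal userspace sketch of that idea, using C11 atomics in place of the kernel's smp_store_release()/smp_load_acquire() (names and pairing here are illustrative, not the actual patch):

```c
/*
 * Hedged sketch: instead of explicit smp_wmb()/smp_rmb() around a plain
 * counter, annotate the accesses so the ordering is part of the operation.
 * Shown with C11 atomics; in the kernel this would be smp_store_release()
 * and smp_load_acquire().
 */
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint64_t version;
static uint64_t payload; /* data the version number is meant to cover */

static void writer_update(uint64_t data, uint64_t ver)
{
	payload = data;
	/* release: the payload store above is visible before the version */
	atomic_store_explicit(&version, ver, memory_order_release);
}

static uint64_t reader_snapshot(uint64_t *data_out)
{
	/* acquire: loads after this see stores before the paired release */
	uint64_t ver = atomic_load_explicit(&version, memory_order_acquire);

	*data_out = payload;
	return ver;
}
```

The arch-appropriate barrier (or none, on strongly ordered CPUs) then falls out of the memory-order annotation automatically.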
>> The fact is, the only cases where this is really an issue is
>> if you've either got intermittently bad hardware, or are
>> dealing with external
> Well, the RAID1+ is all about the failing hardware.
>> storage devices. For the majority of people who are using
>> multi-device setups, the
On Mon, Dec 18, 2017 at 08:06:57 -0500, Austin S. Hemmelgarn wrote:
> The fact is, the only cases where this is really an issue is if you've
> either got intermittently bad hardware, or are dealing with external
Well, the RAID1+ is all about the failing hardware.
> storage devices. For the
On Mon, 2017-12-18 at 12:36 -0500, J. Bruce Fields wrote:
> On Mon, Dec 18, 2017 at 12:22:20PM -0500, Jeff Layton wrote:
> > On Mon, 2017-12-18 at 17:34 +0100, Jan Kara wrote:
> > > On Mon 18-12-17 10:11:56, Jeff Layton wrote:
> > > > static inline bool
> > > > inode_maybe_inc_iversion(struct
On Mon, 2017-12-18 at 10:11 -0500, Jeff Layton wrote:
> From: Jeff Layton
>
> Add a documentation blob that explains what the i_version field is, how
> it is expected to work, and how it is currently implemented by various
> filesystems.
>
> We already have
On Mon, Dec 18, 2017 at 12:22:20PM -0500, Jeff Layton wrote:
> On Mon, 2017-12-18 at 17:34 +0100, Jan Kara wrote:
> > On Mon 18-12-17 10:11:56, Jeff Layton wrote:
> > > static inline bool
> > > inode_maybe_inc_iversion(struct inode *inode, bool force)
> > > {
> > > - atomic64_t *ivp =
On 18.12.2017 19:49, Ulli Horlacher wrote:
> I want to mount an alternative subvolume of a btrfs filesystem.
> I can list the subvolumes when the filesystem is mounted, but how do I
> know them, when the filesystem is not mounted? Is there a query command?
>
> root@xerus:~# mount | grep /test
>
On Mon, 2017-12-18 at 17:34 +0100, Jan Kara wrote:
> On Mon 18-12-17 10:11:56, Jeff Layton wrote:
> > static inline bool
> > inode_maybe_inc_iversion(struct inode *inode, bool force)
> > {
> > - atomic64_t *ivp = (atomic64_t *)&inode->i_version;
> > + u64 cur, old, new;
> >
> > -
I want to mount an alternative subvolume of a btrfs filesystem.
I can list the subvolumes when the filesystem is mounted, but how do I
know them, when the filesystem is not mounted? Is there a query command?
root@xerus:~# mount | grep /test
/dev/sdd4 on /test type btrfs
On Mon 18-12-17 10:11:56, Jeff Layton wrote:
> static inline bool
> inode_maybe_inc_iversion(struct inode *inode, bool force)
> {
> - atomic64_t *ivp = (atomic64_t *)&inode->i_version;
> + u64 cur, old, new;
>
> - atomic64_inc(ivp);
> + cur = (u64)atomic64_read(&inode->i_version);
> +
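The quoted patch is cut off above, but the scheme under discussion can be sketched in userspace C11 atomics (this is my reading of the approach, not the kernel patch itself): the lowest bit of the 64-bit value is a "queried" flag, the counter lives in the upper bits, and an increment is only performed when the flag is set or the caller forces it.

```c
/*
 * Userspace sketch of the lazy i_version scheme: bit 0 is a "queried"
 * flag, the counter occupies bits 63:1. Writers skip the increment when
 * nobody has observed the value since the last bump.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define I_VERSION_QUERIED   1ULL
#define I_VERSION_INCREMENT 2ULL /* counter step, since bit 0 is the flag */

static _Atomic uint64_t i_version;

/* Bump the counter if needed; returns true if it was bumped. */
static bool maybe_inc_iversion(bool force)
{
	uint64_t cur = atomic_load(&i_version);

	for (;;) {
		uint64_t new;

		/* nobody queried since the last bump, and no force: skip */
		if (!force && !(cur & I_VERSION_QUERIED))
			return false;

		/* new value: counter + 1, queried flag cleared */
		new = (cur + I_VERSION_INCREMENT) & ~I_VERSION_QUERIED;
		if (atomic_compare_exchange_weak(&i_version, &cur, new))
			return true;
		/* cur was reloaded by the failed CAS; retry */
	}
}

/* Fetch the counter for an observer, marking it as queried. */
static uint64_t query_iversion(void)
{
	uint64_t cur = atomic_fetch_or(&i_version, I_VERSION_QUERIED);

	return cur >> 1;
}
```

The cmpxchg loop keeps the flag-clear and the increment atomic with respect to concurrent queries, which is why a plain atomic64_inc() no longer suffices.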
On 2017-12-18 09:39, Anand Jain wrote:
Now the procedure to assemble the disks would be to first mount the
good set, without the device set whose new data can be ignored, and
later run btrfs device scan to bring in the missing device and
complete the RAID group, which then shall
On Mon 18-12-17 10:11:53, Jeff Layton wrote:
> From: Jeff Layton
>
> We only really need to update i_version if someone has queried for it
> since we last incremented it. By doing that, we can avoid having to
> update the inode if the times haven't changed.
>
> If the times
From: Jeff Layton
Add a documentation blob that explains what the i_version field is, how
it is expected to work, and how it is currently implemented by various
filesystems.
We already have inode_inc_iversion. Add several other functions for
manipulating and accessing the
From: Jeff Layton
v3:
- move i_version handling functions to new header file
- document that the kernel-managed i_version implementation will appear to
increase over time
- fix inode_cmp_iversion to handle wraparound correctly
v2:
- xfs should use inode_peek_iversion
From: Jeff Layton
Signed-off-by: Jeff Layton
---
fs/affs/amigaffs.c | 5 +++--
fs/affs/dir.c | 5 +++--
fs/affs/super.c | 3 ++-
3 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/fs/affs/amigaffs.c b/fs/affs/amigaffs.c
index
From: Jeff Layton
Signed-off-by: Jeff Layton
---
fs/fat/dir.c | 3 ++-
fs/fat/inode.c | 9 +
fs/fat/namei_msdos.c | 7 ---
fs/fat/namei_vfat.c | 22 +++---
4 files changed, 22 insertions(+), 19 deletions(-)
From: Jeff Layton
The rationale for taking the i_lock when incrementing this value is
lost in antiquity. The readers of the field don't take it (at least
not universally), so my assumption is that it was only done here to
serialize incrementors.
If that is indeed the case,
From: Jeff Layton
For AFS, it's generally treated as an opaque value, so we use the
*_raw variants of the API here.
Note that AFS has quite a different definition for this counter. AFS
only increments it on changes to the data, not for the metadata. We'll
need to reconcile
From: Jeff Layton
Signed-off-by: Jeff Layton
---
fs/exofs/dir.c | 9 +
fs/exofs/super.c | 3 ++-
2 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/fs/exofs/dir.c b/fs/exofs/dir.c
index 98233a97b7b8..c5a53fcc43ea 100644
---
From: Jeff Layton
Signed-off-by: Jeff Layton
---
fs/btrfs/delayed-inode.c | 7 +--
fs/btrfs/file.c | 1 +
fs/btrfs/inode.c | 7 +--
fs/btrfs/ioctl.c | 1 +
fs/btrfs/tree-log.c | 4 +++-
fs/btrfs/xattr.c | 1 +
From: Jeff Layton
Signed-off-by: Jeff Layton
Reviewed-by: Jan Kara
---
fs/ext2/dir.c | 9 +
fs/ext2/super.c | 5 +++--
2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/fs/ext2/dir.c b/fs/ext2/dir.c
index
From: Jeff Layton
For NFS, we just use the "raw" API since the i_version is mostly
managed by the server. The exception there is when the client
holds a write delegation, but we only need to bump it once
there anyway to handle CB_GETATTR.
Signed-off-by: Jeff Layton
From: Jeff Layton
Signed-off-by: Jeff Layton
---
fs/xfs/libxfs/xfs_inode_buf.c | 7 +--
fs/xfs/xfs_icache.c | 5 +++--
fs/xfs/xfs_inode.c | 3 ++-
fs/xfs/xfs_inode_item.c | 3 ++-
fs/xfs/xfs_trans_inode.c | 4 +++-
5
From: Jeff Layton
Signed-off-by: Jeff Layton
Acked-by: Theodore Ts'o
---
fs/ext4/dir.c | 9 +
fs/ext4/inline.c | 7 ---
fs/ext4/inode.c | 13 +
fs/ext4/ioctl.c | 3 ++-
fs/ext4/namei.c | 5 +++--
From: Jeff Layton
Mostly just making sure we use the "get" wrappers so we know when
it is being fetched for later use.
Signed-off-by: Jeff Layton
---
fs/nfsd/nfsfh.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/nfsd/nfsfh.h
From: Jeff Layton
Signed-off-by: Jeff Layton
---
fs/ufs/dir.c | 9 +
fs/ufs/inode.c | 3 ++-
fs/ufs/super.c | 3 ++-
3 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/fs/ufs/dir.c b/fs/ufs/dir.c
index 2edc1755b7c5..50dfce000864
From: Jeff Layton
Signed-off-by: Jeff Layton
---
security/integrity/ima/ima_api.c | 3 ++-
security/integrity/ima/ima_main.c | 3 ++-
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/security/integrity/ima/ima_api.c
From: Jeff Layton
Signed-off-by: Jeff Layton
Reviewed-by: Jan Kara
---
fs/ocfs2/dir.c | 15 ---
fs/ocfs2/inode.c | 3 ++-
fs/ocfs2/namei.c| 3 ++-
fs/ocfs2/quota_global.c | 3 ++-
4 files changed, 14
From: Jeff Layton
If XFS_ILOG_CORE is already set then go ahead and increment it.
Signed-off-by: Jeff Layton
---
fs/xfs/xfs_trans_inode.c | 14 --
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/fs/xfs/xfs_trans_inode.c
From: Jeff Layton
We only really need to update i_version if someone has queried for it
since we last incremented it. By doing that, we can avoid having to
update the inode if the times haven't changed.
If the times have changed, then we go ahead and forcibly increment the
From: Jeff Layton
At this point, we know that "now" and the file times may differ, and we
suspect that the i_version has been flagged to be bumped. Attempt to
bump the i_version, and only mark the inode dirty if that actually
occurred or if one of the times was updated.
From: Jeff Layton
Since i_version is mostly treated as an opaque value, we can exploit that
fact to avoid incrementing it when no one is watching. With that change,
we can avoid incrementing the counter on writes, unless someone has
queried for it since it was last
On 12/18/2017 10:51 AM, Masami Hiramatsu wrote:
> On Fri, 15 Dec 2017 14:12:54 -0500
> Josef Bacik wrote:
>> From: Josef Bacik
>>
>> Error injection is sloppy and very ad-hoc. BPF could fill this niche
>> perfectly with its kprobe functionality. We could
On 12/18/2017 08:52 PM, Nikolay Borisov wrote:
On 17.12.2017 15:52, Anand Jain wrote:
In two-device configs of RAID1/RAID5, where one device can be missing
in the degraded mount, or in configs such as four-device RAID6,
where two devices can be missing, in these types of configs it can
Now the procedure to assemble the disks would be to first mount the
good set, without the device set whose new data can be ignored, and
later run btrfs device scan to bring in the missing device and
complete the RAID group, which then shall reset the flag
what was intended that it should be able to detect a previous
member block-device becoming available again as a different
device inode, which currently is very dangerous in some vital
situations.
Peter, what's the dangerous part here?
If device disappears, the patch [4] will completely
On 12/18/2017 08:01 PM, Nikolay Borisov wrote:
On 17.12.2017 05:04, Anand Jain wrote:
If the device is not present at the time of (-o degrade) mount,
the mount context will create a dummy missing struct btrfs_device.
Later this device may reappear after the FS is mounted and
then device is
On 2017-12-16 14:50, Dark Penguin wrote:
Could someone please point me towards some read about how btrfs handles
multiple devices? Namely, kicking faulty devices and re-adding them.
I've been using btrfs on single devices for a while, but now I want to
start using it in raid1 mode. I booted
On 2017-12-17 10:48, Peter Grandi wrote:
"Duncan"'s reply is slightly optimistic in parts, so some
further information...
[ ... ]
Basically, at this point btrfs doesn't have "dynamic" device
handling. That is, if a device disappears, it doesn't know
it.
That's just the consequence of what
On 17.12.2017 15:52, Anand Jain wrote:
> In two-device configs of RAID1/RAID5, where one device can be missing
> in the degraded mount, or in configs such as four-device RAID6,
> where two devices can be missing, these types of configs can form
> two separate sets of devices, where each of
Commit 17347cec15f919901c90 ("Btrfs: change how we iterate bios in endio")
mentioned that for dio the submitted bio may be fast-cloned. We can't
access the bvec table directly for a cloned bio, so use
bio_get_first_bvec() to retrieve the first bvec.
Cc: Chris Mason
Cc: Josef Bacik
BTRFS uses bio->bi_vcnt to figure out the number of pages, which is no
longer correct once we start to enable multipage bvecs.
So use bio_nr_pages() to do that instead.
Cc: Chris Mason
Cc: Josef Bacik
Cc: David Sterba
Cc: linux-btrfs@vger.kernel.org
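The reason bi_vcnt stops working as a page count is that one multipage bvec entry can span several pages, so the count has to be derived from the byte range instead. A sketch of that arithmetic (helper name is mine, not the exact kernel API):

```c
/*
 * Illustration: the number of pages touched by a segment depends on its
 * starting offset within the first page and its length in bytes, not on
 * how many vector entries describe it.
 */
#include <stddef.h>

#define PAGE_SIZE 4096UL
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* pages touched by a segment at byte offset `offset` within its first
 * page, extending `len` bytes */
static unsigned long segment_nr_pages(unsigned long offset, unsigned long len)
{
	return DIV_ROUND_UP(offset + len, PAGE_SIZE);
}
```

Note the edge case: a 2-byte segment starting at offset 4095 straddles a page boundary and touches two pages, even though it is a single vector entry.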
Preparing for supporting multipage bvec.
Cc: Chris Mason
Cc: Josef Bacik
Cc: David Sterba
Cc: linux-btrfs@vger.kernel.org
Signed-off-by: Ming Lei
---
fs/btrfs/compression.c | 5 -
fs/btrfs/extent_io.c | 5 +++--
2 files
On 2017-12-17 08:52, Anand Jain wrote:
In two-device configs of RAID1/RAID5, where one device can be missing
in the degraded mount, or in configs such as four-device RAID6,
where two devices can be missing, these types of configs can form
two separate sets of devices, where each of the sets
On 18.12.2017 10:49, Anand Jain wrote:
>
>
>> Put another way, the multi-device design is/was based on the
>> demented idea that block-devices that are missing are/should be
>> "remove"d, so that a 2-device volume with a 'raid1' profile
>> becomes a 1-device volume with a 'single'/'dup'
On 17.12.2017 05:04, Anand Jain wrote:
> If the device is not present at the time of (-o degrade) mount,
> the mount context will create a dummy missing struct btrfs_device.
> Later this device may reappear after the FS is mounted and
> then device is included in the device list but it missed
>> I haven't seen that, but I doubt that it is the radical
>> redesign of the multi-device layer of Btrfs that is needed to
>> give it operational semantics similar to those of MD RAID,
>> and that I have vaguely described previously.
> I agree that btrfs volume manager is incomplete in view of
>
On 18.12.2017 12:03, Nikolay Borisov wrote:
> Currently if a mounted-btrfs instance is mounted for the 2nd time
> without first unmounting the first instance then we hit a memory leak
> in btrfs_mount_root, because the fs_info of the acquired superblock
> differs from the newly allocated fs
Currently if a mounted-btrfs instance is mounted for the 2nd time
without first unmounting the first instance then we hit a memory leak
in btrfs_mount_root, because the fs_info of the acquired superblock
differs from the newly allocated fs_info. Fix this by specifically
checking if the fs_info
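The leak pattern being fixed can be mocked in userspace (all names here are stand-ins; lookup_sb() plays the role of sget()): when the lookup returns an already-mounted superblock, the fs_info allocated for this mount attempt is a duplicate and must be freed rather than leaked.

```c
/*
 * Userspace mock of the double-mount leak, with hypothetical names.
 * If the returned superblock already carries an fs_info (second mount
 * of the same device), the freshly allocated fs_info is a duplicate.
 */
#include <stdlib.h>

struct fs_info { int dummy; };
struct super_block { struct fs_info *s_fs_info; };

static struct super_block global_sb; /* pretend: sb shared across mounts */

/* sget()-like lookup: returns the (possibly existing) superblock */
static struct super_block *lookup_sb(void)
{
	return &global_sb;
}

/* Returns the fs_info actually in use for this mount. */
static struct fs_info *mount_root(void)
{
	struct fs_info *fs_info = calloc(1, sizeof(*fs_info));
	struct super_block *sb = lookup_sb();

	if (sb->s_fs_info && sb->s_fs_info != fs_info) {
		/* reused superblock: our fresh fs_info is a duplicate */
		free(fs_info);
		return sb->s_fs_info;
	}
	sb->s_fs_info = fs_info; /* first mount: adopt the new fs_info */
	return fs_info;
}
```

Without the `free()` on the reused-superblock path, every additional mount of the same device would leak one fs_info allocation.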
On Fri, 15 Dec 2017 14:12:54 -0500
Josef Bacik wrote:
> From: Josef Bacik
>
> Error injection is sloppy and very ad-hoc. BPF could fill this niche
> perfectly with its kprobe functionality. We could make sure errors are
> only triggered in specific call
On 18.12.2017 17:08, Anand Jain wrote:
> Update btrfs_check_rw_degradable() to check against the given
> device if it's lost.
>
> We can use this function to know whether the volume is going to be
> in degraded mode or a failed state when the given device fails,
> which is needed when we are handling
On Fri, 15 Dec 2017 14:12:52 -0500
Josef Bacik wrote:
> From: Josef Bacik
>
> Using BPF we can override kprobed functions and return arbitrary
> values. Obviously this can be a bit unsafe, so make this feature opt-in
> for functions. Simply tag a
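The snippet is cut off, but the opt-in idea it describes — tagging a function so its address lands in a dedicated section that the injection framework consults — can be sketched in userspace. The macro and symbol names below are illustrative, not the kernel's:

```c
/*
 * Userspace sketch of opt-in error injection: a tagging macro records a
 * function's address in a named ELF section, and overriding is only
 * permitted for addresses found in that section. Names are invented.
 */
#include <stdbool.h>

#define ALLOW_ERROR_INJECTION(fn)					\
	static void *__ei_entry_##fn					\
	__attribute__((section("error_injection_list"), used)) = (void *)fn

/* linker-provided bounds of the section (GNU ld/LLD convention) */
extern void *__start_error_injection_list[];
extern void *__stop_error_injection_list[];

static int fragile_op(void) { return 0; }
ALLOW_ERROR_INJECTION(fragile_op);

static int untagged_op(void) { return 0; }

/* Would gate the kprobe override: only tagged functions qualify. */
static bool within_error_injection_list(void *addr)
{
	for (void **p = __start_error_injection_list;
	     p < __stop_error_injection_list; p++)
		if (*p == addr)
			return true;
	return false;
}
```

This keeps the override mechanism safe by default: a BPF program cannot change the return value of a function whose author never opted in.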
Update btrfs_check_rw_degradable() to check against the given
device if it's lost.
We can use this function to know whether the volume is going to be
in degraded mode or a failed state when the given device fails,
which is needed when we are handling the device-failed state.
A preparatory patch does
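The check being described amounts to counting missing devices, treating the given device as lost, against the profile's tolerance. A small sketch with invented names (the real function walks per-chunk stripes, not a flat device list):

```c
/*
 * Sketch of the degradability check: would the volume still be mountable
 * read-write if `failing_dev` were lost? Names and structure invented
 * for illustration.
 */
#include <stdbool.h>

struct device { bool missing; };

/* tolerated = max devices the profile can lose: 1 for RAID1/RAID5,
 * 2 for RAID6, 0 for single/RAID0 */
static bool check_rw_degradable(const struct device *devs, int ndevs,
				int tolerated,
				const struct device *failing_dev)
{
	int missing = 0;

	for (int i = 0; i < ndevs; i++)
		if (devs[i].missing || &devs[i] == failing_dev)
			missing++;
	return missing <= tolerated;
}
```

So for a two-device RAID1, losing one device is degraded-but-mountable, while losing a device when another is already missing crosses into the failed state.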
formerly missing device - a very big penalty, because the whole array
has to be resynced to catch it up for what might be only a few minutes
of missing time.
For raid1 [1] cli will pick only new chunks.
[1]
btrfs bal start -dprofiles=single -mprofiles=single
Thanks, Anand
Put another way, the multi-device design is/was based on the
demented idea that block-devices that are missing are/should be
"remove"d, so that a 2-device volume with a 'raid1' profile
becomes a 1-device volume with a 'single'/'dup' profile, and not
a 2-device volume with a missing
On 12.12.2017 18:28, Lakshmipathi.G wrote:
>> Actually 151 has been failing for me as well but not 100
>>
> Okay, can you share the kernel .config file? I'll give it a try
> with those config and check 100, sometime tomorrow.
I've attached my master config; before compiling a kernel with it I