hi,
On 10/14/2016 09:59 PM, Holger Hoffstätte wrote:
On 10/06/16 04:51, Wang Xiaoguang wrote:
When testing btrfs compression, we sometimes get an ENOSPC error even though the
fs still has much free space; xfstests generic/171, generic/172, generic/173,
generic/174, generic/175 can reveal this bug in my
Add a new subcommand to btrfs inspect-internal
btrfs inspect-internal bg_analysis
Gives information about all the block groups.
Signed-off-by: Divya Indi
Reviewed-by: Ashish Samant
Reviewed-by: Liu Bo
---
cmds-inspect.c
An efficient alternative for retrieving block groups:
get_chunks(): Walk the chunk tree to retrieve the chunks.
get_bg_info(): For each retrieved chunk, look up the exactly matching block
group in the extent tree.
Signed-off-by: Divya Indi
Reviewed-by: Ashish Samant
These patches aim to add 2 new subcommands that:
-> provide information about block groups
-> help decide whether balance can reduce the no. of data block groups and, if
it can, provide the block group object id for "-dvrange"
[PATCH 1/3] btrfs-progs: Generic functions to retrieve chunks and
Add new subcommand to btrfs inspect-internal
btrfs inspect-internal balance_check
Checks whether 'btrfs balance' can help create more space (only
considers data block groups).
Signed-off-by: Divya Indi
Reviewed-by: Ashish Samant
Reviewed-by:
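The decision the proposed balance_check subcommand makes can be illustrated with a toy sketch (Python, all names invented here; the real subcommand would be C inside btrfs-progs): balance can only release space when the live data would fit into fewer block groups than are currently allocated.

```python
import math

def balance_could_free_space(used_per_bg, bg_size):
    """Return True if rebalancing could release whole data block groups,
    i.e. the live data would fit into fewer block groups than are
    currently allocated. Toy heuristic, not the real balance_check logic."""
    needed = math.ceil(sum(used_per_bg) / bg_size)
    return needed < len(used_per_bg)

GIB = 1 << 30
# Four 1 GiB data block groups, each only 25% full: data fits in one.
print(balance_could_free_space([GIB // 4] * 4, GIB))      # True
# Two nearly full block groups: nothing to reclaim.
print(balance_could_free_space([GIB - 1, GIB - 1], GIB))  # False
```

In the True case, the object id of an underused block group would be the candidate for "-dvrange".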
Hi,
I have a BTRFS setup with 8 HDDs striped with RAID5. I know it's marked as
experimental, as it has problems with power failures, but I decided to give it a
shot. Last night one of the disks in the array failed, and the OS decided to
remount the FS in read-only mode. The data was still accessible. I
On Sat, Oct 15, 2016 at 08:42:40PM -0400, Dave Jones wrote:
On Thu, Oct 13, 2016 at 05:18:46PM -0400, Chris Mason wrote:
> > > > .. and of course the first thing that happens is a completely different
> > > > btrfs trace..
> > > >
> > > >
> > > > WARNING: CPU: 1 PID: 21706 at
Commit 62b99540a1d91e464 (btrfs: relocation: Fix leaking qgroups numbers
on data extents) only partly fixes the problem.
The previous fix traces all new data extents at transaction commit
time when balance finishes.
However, balance is not done in one large transaction; every path
replacement
Rename btrfs_qgroup_insert_dirty_extent(_nolock) to
btrfs_qgroup_trace_extent(_nolock), according to the new
reserve/trace/account naming scheme.
Signed-off-by: Qu Wenruo
---
fs/btrfs/delayed-ref.c | 2 +-
fs/btrfs/extent-tree.c | 6 +++---
Move account_shared_subtree() to qgroup.c and rename it to
btrfs_qgroup_trace_subtree().
Do the same thing for account_leaf_items() and rename it to
btrfs_qgroup_trace_leaf_items().
Since all these functions are only for qgroup, moving them to qgroup.c and
exporting them is more appropriate.
Add an explanation of how btrfs qgroup works.
The qgroup work flow is split into 3 main phases:
1) Reserve
Ensure the qgroup doesn't exceed its limit.
2) Trace
Inform qgroup which extents to trace.
3) Account
Calculate the qgroup number change for each traced extent.
This should save quite some time for new
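The three phases described above can be sketched as a toy model (Python, with invented names and sizes; this is an illustration of the flow, not the kernel's qgroup code):

```python
class ToyQgroup:
    """Toy model of the reserve/trace/account phases. Illustration only:
    names and sizes are invented, not the kernel's qgroup implementation."""

    def __init__(self, limit):
        self.limit = limit      # qgroup byte limit
        self.reserved = 0       # bytes reserved but not yet accounted
        self.accounted = 0      # bytes already charged to the qgroup
        self.traced = {}        # bytenr -> size of extents to account

    def reserve(self, size):
        # Phase 1: Reserve - fail early if the write would exceed the limit.
        if self.accounted + self.reserved + size > self.limit:
            return False
        self.reserved += size
        return True

    def trace(self, bytenr, size):
        # Phase 2: Trace - record which extents changed in this transaction.
        self.traced[bytenr] = size

    def account(self):
        # Phase 3: Account - at commit, fold traced extents into the numbers.
        for size in self.traced.values():
            self.accounted += size
            self.reserved -= size
        self.traced.clear()

q = ToyQgroup(limit=100)
print(q.reserve(60))   # True: within the limit
q.trace(bytenr=4096, size=60)
q.account()
print(q.reserve(60))   # False: 60 accounted + 60 more would exceed 100
```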
The patchset does the following things:
1) Enhance comments for qgroup, rename 2 functions
Explain how qgroup works, so new developers won't waste too much
time digging into the code.
The qgroup work flow is split into 3 main phases:
Reserve, Trace, Account.
And rename
At 10/18/2016 08:35 AM, Divya Indi wrote:
An efficient alternative for retrieving block groups:
get_chunks(): Walk the chunk tree to retrieve the chunks.
get_bg_info(): For each retrieved chunk, look up the exactly matching block
group in the extent tree.
Signed-off-by: Divya Indi
At 10/18/2016 08:35 AM, Divya Indi wrote:
Add new subcommand to btrfs inspect-internal
btrfs inspect-internal balance_check
Checks whether 'btrfs balance' can help create more space (only
considers data block groups).
I don't think it's good to add a new subcommand just for that.
Why
At 10/18/2016 08:35 AM, Divya Indi wrote:
Add a new subcommand to btrfs inspect-internal
btrfs inspect-internal bg_analysis
Gives information about all the block groups.
Signed-off-by: Divya Indi
Reviewed-by: Ashish Samant
Reviewed-by: Liu
Clarify the behavior of the dedupe ioctl.
Signed-off-by: Darrick J. Wong
---
man2/ioctl_ficlonerange.2 |2 +-
man2/ioctl_fideduperange.2 | 26 ++
2 files changed, 23 insertions(+), 5 deletions(-)
diff --git a/man2/ioctl_ficlonerange.2
Add a blurb to the fallocate manpage explaining that the fallocate
command with the UNSHARE mode flag may use CoW to unshare blocks to
guarantee that a disk write won't fail with ENOSPC.
Signed-off-by: Darrick J. Wong
---
man2/fallocate.2 | 10 ++
1 file
I would like to monitor my btrfs-filesystem for missing drives.
This is actually correct behavior; the filesystem reports that it should
have 6 devices, which is how it knows a device is missing.
"Missing" means missing at the time of mount. So how are you planning
to monitor a disk
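For a cron-style check like the mdadm script mentioned in the question, one option is to parse the output of `btrfs filesystem show`. A sketch in Python (the text format here is assumed from common btrfs-progs versions, so verify it against your own version's output):

```python
import re

def devices_missing(fi_show_output):
    """Return True if `btrfs filesystem show` output indicates a missing
    device: either the explicit warning line is present, or the number of
    devid lines does not match the reported total."""
    if "Some devices missing" in fi_show_output:
        return True
    total = re.search(r"Total devices (\d+)", fi_show_output)
    present = len(re.findall(r"^\s*devid\b", fi_show_output, re.MULTILINE))
    return bool(total) and int(total.group(1)) != present

# Assumed sample output for a degraded 3-device filesystem.
DEGRADED = """\
Label: none  uuid: 11111111-2222-3333-4444-555555555555
\tTotal devices 3 FS bytes used 1.00GiB
\tdevid    1 size 1.00TiB used 10.00GiB path /dev/sda
\tdevid    2 size 1.00TiB used 10.00GiB path /dev/sdb
\t*** Some devices missing
"""
print(devices_missing(DEGRADED))  # True
```

A cron job could run `btrfs filesystem show`, feed the text to this function, and mail a warning when it returns True.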
Stefan Priebe - Profihost AG posted on Mon, 17 Oct 2016 08:50:37 +0200 as
excerpted:
> Am 17.10.2016 um 03:50 schrieb Qu Wenruo:
>> At 10/17/2016 02:54 AM, Stefan Priebe - Profihost AG wrote:
>>> Am 16.10.2016 um 00:37 schrieb Hans van Kranenburg:
Hi,
On 10/15/2016 10:49 PM, Stefan
On Tue, 18 Oct 2016 09:39:32 +0800
Qu Wenruo wrote:
> > static const char * const cmd_inspect_inode_resolve_usage[] = {
> > "btrfs inspect-internal inode-resolve [-v] ",
> > "Get file system paths for the given inode",
> > @@ -702,6 +814,8 @@ const struct
On 10/17/2016 06:53 PM, Andrei Borzenkov wrote:
> I try to understand how to build a tree of snapshots (i.e. - which
> subvolume was used to snapshot/clone other subvolume). What is the
> correct way to determine it? In particular, "btrfs sub list -p" always
> prints something for "parent
On Mon, Oct 17, 2016 at 06:44:14PM +0200, Stefan Malte Schumacher wrote:
> Hello
>
> I would like to monitor my btrfs-filesystem for missing drives. On
> Debian mdadm uses a script in /etc/cron.daily, which calls mdadm and
> sends an email if anything is wrong with the array. I would like to do
>
On Mon, Oct 17, 2016 at 9:44 AM, Stefan Malte Schumacher
wrote:
> Hello
>
> I would like to monitor my btrfs-filesystem for missing drives. On
> Debian mdadm uses a script in /etc/cron.daily, which calls mdadm and
> sends an email if anything is wrong with the
On Sat, Oct 15, 2016 at 11:18:37PM -0700, Christoph Hellwig wrote:
> On Sat, Oct 15, 2016 at 10:03:03AM -0700, Christoph Hellwig wrote:
> > The poster child would be btrfs, and I would have added some output
> > here if btrfs support in xfstests wasn't completely broken at this
> > point.
> >
> >
Am 17.10.2016 um 03:50 schrieb Qu Wenruo:
> At 10/17/2016 02:54 AM, Stefan Priebe - Profihost AG wrote:
>> Am 16.10.2016 um 00:37 schrieb Hans van Kranenburg:
>>> Hi,
>>>
>>> On 10/15/2016 10:49 PM, Stefan Priebe - Profihost AG wrote:
cp --reflink=always sometimes takes very long. (i.e.
On Wed, Oct 12, 2016 at 11:12:42AM +0800, Wang Xiaoguang wrote:
> hi,
>
> Stefan often reports enospc error in his servers when having btrfs
> compression
> enabled. Now he has applied these 2 patches to run and no enospc error
> occurs
> for more than 6 days, it seems they are useful :)
>
>
On Thu, Sep 08, 2016 at 03:12:49PM +0800, Qu Wenruo wrote:
> This patchset can be fetched from github:
> https://github.com/adam900710/linux.git wang_dedupe_20160907
Can you please publish the patchset in a branch that does not change
name and is not based on for-next? I'm going to do less
On Thu, Oct 13, 2016 at 09:47:11AM +0100, Filipe Manana wrote:
> > Since the crash is similar to the call chains from Jeff's report,
> > ie.
> > btrfs_del_csums
> > -> btrfs_search_slot
> > -> btrfs_cow_block
> > -> btrfs_mark_buffer_dirty
> >
> > I just wonder that whether
On Thu, Oct 13, 2016 at 09:23:39AM +0800, Wang Xiaoguang wrote:
> Indeed, this just makes the behavior similar to xfs when a process has
> fatal signals pending, and it'll make fstests/generic/298 happy.
>
> Signed-off-by: Wang Xiaoguang
Reviewed-by: David Sterba
On Mon, Oct 17, 2016 at 09:20:25AM +0800, Junjie Mao wrote:
> Fixes: 4246a0b63bd8 ("block: add a bi_error field to struct bio")
>
> Signed-off-by: Junjie Mao
Ack, but please resend it with CC: sta...@vger.kernel.org # 4.3+
On Mon, Oct 17, 2016 at 03:00:25PM +0200, David Sterba wrote:
> On Thu, Oct 13, 2016 at 09:47:11AM +0100, Filipe Manana wrote:
> > > Since the crash is similar to the call chains from Jeff's report,
> > > ie.
> > > btrfs_del_csums
> > > -> btrfs_search_slot
> > > -> btrfs_cow_block
> > >
From: Filipe Manana
Hi Chris, please pull the following send fix for the 4.9 kernel.
Doing snapshots while balance is ongoing can result in file extent items whose
only difference between snapshots is their bytenr, which got replaced by the
relocation process, and the inode
Hello
I would like to monitor my btrfs-filesystem for missing drives. On
Debian mdadm uses a script in /etc/cron.daily, which calls mdadm and
sends an email if anything is wrong with the array. I would like to do
the same with btrfs. In my first attempt I grepped and cut the
information from
I try to understand how to build a tree of snapshots (i.e. - which
subvolume was used to snapshot/clone other subvolume). What is the
correct way to determine it? In particular, "btrfs sub list -p" always
prints something for "parent snapshot", while "btrfs sub list -q" only
prints parent_uuid for
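One workable approach to building the snapshot tree is to key off parent_uuid from `btrfs subvolume list -qu`. A Python sketch (the line format is assumed from typical btrfs-progs output and may differ between versions; '-' marks a subvolume with no parent):

```python
import re

# Assumed line shape of `btrfs subvolume list -qu` output.
LINE = re.compile(
    r"ID (\d+) gen \d+ top level \d+ "
    r"parent_uuid (\S+) uuid (\S+) path (\S+)"
)

def snapshot_parents(listing):
    """Map each subvolume path to the path of the subvolume it was
    snapshotted from (None when there is no parent_uuid)."""
    path_by_uuid, rows = {}, []
    for line in listing.strip().splitlines():
        m = LINE.match(line)
        if m:
            _id, parent_uuid, uuid, path = m.groups()
            path_by_uuid[uuid] = path
            rows.append((path, parent_uuid))
    return {path: path_by_uuid.get(pu) if pu != "-" else None
            for path, pu in rows}

SAMPLE = """\
ID 257 gen 8 top level 5 parent_uuid - uuid aaaa-01 path base
ID 258 gen 9 top level 5 parent_uuid aaaa-01 uuid bbbb-02 path snaps/base-1
"""
print(snapshot_parents(SAMPLE))  # {'base': None, 'snaps/base-1': 'base'}
```

Note that parent_uuid only survives as long as the source subvolume's uuid is known; if the source was deleted, the lookup returns None.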
On 2016-10-17 12:44, Stefan Malte Schumacher wrote:
Hello
I would like to monitor my btrfs-filesystem for missing drives. On
Debian mdadm uses a script in /etc/cron.daily, which calls mdadm and
sends an email if anything is wrong with the array. I would like to do
the same with btrfs. In my
May be better to use /sys/fs/btrfs/<UUID>/devices to find the devices
to monitor, and then monitor them with blktrace - maybe there's some
coarser granularity available there, I'm not sure. The thing is, as
far as Btrfs alone is concerned, a drive can be "bad" and you're
effectively degraded, while the
I've been following that thread. It's been my fear.
I'm in the process of restoring what I can get off of it so that I can
re-create the file system with raid1, which, if I'm reading that thread
correctly, doesn't suffer at all from the rmw problems extant in the raid5/6
code at the