On 2018/02/16 4:04, Omar Sandoval wrote:
> From: Omar Sandoval
> +PUBLIC enum btrfs_util_error btrfs_util_create_subvolume_iterator(const char *path,
> +								    uint64_t top,
> +
On 23.02.2018 01:39, David Sterba wrote:
> On Thu, Feb 22, 2018 at 12:24:40PM -0700, Liu Bo wrote:
> Not even that far, isize is truncated before calling inode_dio_wait()
> and a memory barrier is set to ensure the correct order, so dio read
> would simply return if it's reading past
On 2018/02/16 4:04, Omar Sandoval wrote:
> From: Omar Sandoval
> +PUBLIC enum btrfs_util_error btrfs_util_subvolume_path_fd(int fd, uint64_t id,
> +							    char **path_ret)
> +{
> + char *path, *p;
> + size_t capacity =
On 2018/02/16 4:05, Omar Sandoval wrote:
> From: Omar Sandoval
> +static struct subvol_list *btrfs_list_deleted_subvols(int fd,
> +						      struct btrfs_list_filter_set *filter_set)
> +{
> + struct subvol_list *subvols = NULL;
>
On 2018/02/16 4:04, Omar Sandoval wrote:
> From: Omar Sandoval
>
> Signed-off-by: Omar Sandoval
> ---
> libbtrfsutil/btrfsutil.h | 21 +++
> libbtrfsutil/python/btrfsutilpy.h | 3 +
> libbtrfsutil/python/module.c
On 2018-02-23 09:12, Holger Hoffstätte wrote:
> On 02/22/18 05:52, Qu Wenruo wrote:
>> btrfs_read_block_groups() is used to build up the block group cache for
>> all block groups, so it will iterate all block group items in extent
>> tree.
>>
>> For large filesystems (TB level), it will search
On 02/22/18 05:52, Qu Wenruo wrote:
> btrfs_read_block_groups() is used to build up the block group cache for
> all block groups, so it will iterate all block group items in extent
> tree.
>
> For large filesystems (TB level), it will search for BLOCK_GROUP_ITEM
> thousands of times, which is the
On Thu, Feb 22, 2018 at 04:33:24AM +, Tomasz Kłoczko wrote:
> -- Forwarded message --
> From:
> Date: 21 February 2018 at 16:22
> Subject: [Bug 1547319] 4.16.0-0.rc1.git4.1.fc28.x86_64 #1 Not tainted:
> possible recursive locking detected
> To:
If there's no hung task listed in dmesg you could try to do sysrq+t to
find out what everything's up to, although then you have to learn how
to parse the result.
Chris Murphy
The only thing I can think of is something's updating metadata due to
relatime mount option. Maybe try noatime? At 9GiB, really it's 4.5GiB
because whatever is being written is being doubled by raid1 profile
and multiple devices. There is a case of wandering trees where a
little bit of change can
On Thu, Feb 22, 2018 at 12:24:40PM -0700, Liu Bo wrote:
> > > > Not even that far, isize is truncated before calling inode_dio_wait()
> > > > and a memory barrier is set to ensure the correct order, so dio read
> > > > would simply return if it's reading past isize.
> > >
> > > Please, describe
On 2018-02-23 00:31, Ellis H. Wilson III wrote:
> On 02/21/2018 11:56 PM, Qu Wenruo wrote:
>> On 2018-02-22 12:52, Qu Wenruo wrote:
>>> btrfs_read_block_groups() is used to build up the block group cache for
>>> all block groups, so it will iterate all block group items in extent
>>> tree.
>>>
On 2018-02-23 06:44, Jeff Mahoney wrote:
> On 12/22/17 1:18 AM, Qu Wenruo wrote:
>> Unlike the reservation calculation used in inode rsv for metadata, qgroup
>> doesn't really need to care about things like csum size or extent usage for
>> whole tree COW.
>>
>> Qgroup cares more about the net change of extent
On 12/22/17 1:18 AM, Qu Wenruo wrote:
> Unlike the reservation calculation used in inode rsv for metadata, qgroup
> doesn't really need to care about things like csum size or extent usage for
> whole tree COW.
>
> Qgroup cares more about the net change of extent usage.
> That is to say, if we're going to insert
On Thu, Feb 22, 2018 at 12:09:45PM -0700, Liu Bo wrote:
> On Thu, Feb 22, 2018 at 08:49:30AM +0200, Nikolay Borisov wrote:
> >
> >
> > On 22.02.2018 00:38, Liu Bo wrote:
> > > On Wed, Feb 21, 2018 at 07:05:13PM +, Filipe Manana wrote:
> > >> On Wed, Feb 21, 2018 at 6:28 PM, Liu Bo
On Thu, Feb 22, 2018 at 08:49:30AM +0200, Nikolay Borisov wrote:
>
>
> On 22.02.2018 00:38, Liu Bo wrote:
> > On Wed, Feb 21, 2018 at 07:05:13PM +, Filipe Manana wrote:
> >> On Wed, Feb 21, 2018 at 6:28 PM, Liu Bo wrote:
> >>> On Wed, Feb 21, 2018 at 02:42:08PM +,
Hi,
I've been using btrfs for some time now on my server and am pretty
satisfied with its performance and features. I'm running Ubuntu 16.04
64bit with kernel 4.4.0-112.
The other day I installed collectd, InfluxDB and Grafana on my server.
I was surprised to see on the graphs, that there
On 02/21/2018 11:56 PM, Qu Wenruo wrote:
On 2018-02-22 12:52, Qu Wenruo wrote:
btrfs_read_block_groups() is used to build up the block group cache for
all block groups, so it will iterate all block group items in extent
tree.
For large filesystems (TB level), it will search for
Now that the read side is extracted into its own function, do the same
to the write side. This leaves btrfs_get_blocks_direct_write with the
sole purpose of handling the common locking required. Also flip the
condition in btrfs_get_blocks_direct_write so that the write case
comes first and we check
Currently this function handles both the READ and WRITE dio cases. This
is facilitated by a bunch of 'if' statements, a goto short-circuit
statement and a very perverse aliasing of the "!created" (READ) case
by setting lockstart = lockend and checking for lockstart < lockend for
detecting the write.
The btrfs inspect dump-tree CLI picks the disk with the largest generation
to read the root tree, even when not all of the devices were provided on
the command line. But with a two-disk RAID1 you may need to know what's on
each disk individually, so this option, -x | --degraded, tells it to use
only the given disk to
Moving between hosts of opposite endianness will report bogus numbers in
sysfs, and mount may fail as the root will not be restored correctly. If
the filesystem is always used on hosts of the same endianness, this will
not be a problem.
Fix this by using the btrfs_set_super...() functions to set
fs_info::super_copy
On 2018-02-21 10:56, Hans van Kranenburg wrote:
On 02/21/2018 04:19 PM, Ellis H. Wilson III wrote:
$ sudo btrfs fi df /mnt/btrfs
Data, single: total=3.32TiB, used=3.32TiB
System, DUP: total=8.00MiB, used=384.00KiB
Metadata, DUP: total=16.50GiB, used=15.82GiB
GlobalReserve, single:
On Wed, Feb 21, 2018 at 10:38 PM, Liu Bo wrote:
> On Wed, Feb 21, 2018 at 07:05:13PM +, Filipe Manana wrote:
>> On Wed, Feb 21, 2018 at 6:28 PM, Liu Bo wrote:
>> > On Wed, Feb 21, 2018 at 02:42:08PM +, Filipe Manana wrote:
>> >> On Wed, Feb 21,
On 22.02.2018 11:23, Qu Wenruo wrote:
>
>
> [snip]
>>> -}
>>> -
>>> void btrfs_put_block_group_cache(struct btrfs_fs_info *info)
>>> {
>>> struct btrfs_block_group_cache *block_group;
>>> @@ -9988,12 +9934,15 @@ int btrfs_read_block_groups(struct btrfs_fs_info *info)
>>> {
>>>
[snip]
>> -}
>> -
>> void btrfs_put_block_group_cache(struct btrfs_fs_info *info)
>> {
>> struct btrfs_block_group_cache *block_group;
>> @@ -9988,12 +9934,15 @@ int btrfs_read_block_groups(struct btrfs_fs_info *info)
>> {
>> struct btrfs_path *path;
>> int ret;
>> +
On 22.02.2018 06:52, Qu Wenruo wrote:
> btrfs_read_block_groups() is used to build up the block group cache for
> all block groups, so it will iterate all block group items in extent
> tree.
>
> For large filesystems (TB level), it will search for BLOCK_GROUP_ITEM
> thousands of times, which is the