Gentle ping.
This patch can be applied independently and is pretty critical for
exposing the rescan and transaction race.
It would be quite an important part of crafting a test case.
Thanks,
Qu
On 2018-03-08 10:40, je...@suse.com wrote:
> From: Jeff Mahoney
>
> This patch adds a new
When debugging a quota rescan race, btrfs rescan could sometimes account
an old (committed) leaf and then re-account the newly committed leaf in
the next generation.
This race needs the extra transid to locate, so add @transid to
trace_btrfs_qgroup_account_extent() for such debugging.
Signed-off-by: Qu Wenruo
Goffredo Baroncelli posted on Wed, 02 May 2018 22:40:27 +0200 as
excerpted:
> Anyway, my "rant" started when Duncan put the missing parity checksum
> near the write hole. The first might be a performance problem. The
> write hole, instead, could lead to losing data. My intention was to
>
Gandalf Corvotempesta posted on Wed, 02 May 2018 19:25:41 +0000 as
excerpted:
> On 05/02/2018 03:47 AM, Duncan wrote:
>> Meanwhile, have you looked at zfs? Perhaps they have something like
>> that?
>
> Yes, I've looked at ZFS and I'm using it on some servers but I don't
> like it too much for
On Mon, Apr 30, 2018 at 11:56 AM, Jayashree Mohan
wrote:
> Hi Jeff,
>
> On Mon, Apr 30, 2018 at 11:51 AM, Jeff Mahoney wrote:
>> On 4/30/18 12:04 PM, Vijay Chidambaram wrote:
>>> Hi,
>>>
>>> We found two more cases where the btrfs behavior is a little
On 05/02/2018 11:20 PM, waxhead wrote:
>
>
[...]
>
> Ok, before attempting an answer I have to admit that I do not know enough
> about how RAID56 is laid out on disk in BTRFS terms. Is data checksummed per
> stripe or per disk? Is parity calculated on the data only or is it calculated
> on
Andrei Borzenkov wrote:
02.05.2018 21:17, waxhead wrote:
Goffredo Baroncelli wrote:
On 05/02/2018 06:55 PM, waxhead wrote:
So again, which problem would having the parity checksummed solve?
To the best of my knowledge, none. In any case the data is
checksummed so it is impossible to
From: Jeff Mahoney
btrfs_init_work clears the work struct except for ->wq, so the memset
before calling btrfs_init_work in qgroup_rescan_init is unnecessary.
We'll also initialize ->wq in btrfs_init_work so that it's obvious.
Signed-off-by: Jeff Mahoney
---
From: Jeff Mahoney
If we fail to allocate memory for a path, don't bother trying to
insert the qgroup status item. We haven't done anything yet and it'll
fail also. Just print an error and be done with it.
Signed-off-by: Jeff Mahoney
---
fs/btrfs/qgroup.c | 9
From: Jeff Mahoney
Hi Dave -
Here's the updated patchset for the rescan races. This fixes the issue
where we'd try to start multiple workers. It introduces a new "ready"
bool that we set during initialization and clear while queuing the worker.
The queuer is also now
From: Jeff Mahoney
Commit 8d9eddad194 (Btrfs: fix qgroup rescan worker initialization)
fixed the issue with BTRFS_IOC_QUOTA_RESCAN_WAIT being racy, but
ended up reintroducing the hang-on-unmount bug that the original
commit had fixed.
The race this time is between
On 05/02/2018 09:29 PM, Austin S. Hemmelgarn wrote:
> On 2018-05-02 13:25, Goffredo Baroncelli wrote:
>> On 05/02/2018 06:55 PM, waxhead wrote:
So again, which problem would having the parity checksummed solve? To the
best of my knowledge, none. In any case the data is
On 2 May 2018 at 20:43, Paul Richards wrote:
> The issue I have now is that the filesystem cannot be unmounted.
> "umount" reports "target is busy", but I cannot find anything relevant
> with "lsof" or "fuser" (this is a very quiet home NAS). Is this
> something related
On 29 April 2018 at 02:50, Qu Wenruo wrote:
>
>
> On 2018-04-29 04:16, Paul Richards wrote:
>> On 28 April 2018 at 20:39, Patrik Lundquist
>> wrote:
>>> On 28 April 2018 at 20:54, Paul Richards wrote:
Hi,
On 2018-05-02 13:25, Goffredo Baroncelli wrote:
On 05/02/2018 06:55 PM, waxhead wrote:
So again, which problem would having the parity checksummed solve? To the best
of my knowledge, none. In any case the data is checksummed so it is
impossible to return corrupted data (modulo bugs :-) ).
On 05/02/2018 03:47 AM, Duncan wrote:
> Meanwhile, have you looked at zfs? Perhaps they have something like that?
Yes, I've looked at ZFS and I'm using it on some servers but I don't like
it too much, for multiple reasons; for example:
1) it is not officially in the kernel, so we have to build a module
On 05/02/2018 08:17 PM, waxhead wrote:
> Goffredo Baroncelli wrote:
>> On 05/02/2018 06:55 PM, waxhead wrote:
So again, which problem would having the parity checksummed solve? To the
best of my knowledge, none. In any case the data is checksummed so it is
impossible to
Goffredo Baroncelli wrote:
On 05/02/2018 06:55 PM, waxhead wrote:
So again, which problem would having the parity checksummed solve? To the best
of my knowledge, none. In any case the data is checksummed so it is
impossible to return corrupted data (modulo bugs :-) ).
I am not a BTRFS
On 05/02/2018 06:55 PM, waxhead wrote:
>>
>> So again, which problem would having the parity checksummed solve? To the
>> best of my knowledge, none. In any case the data is checksummed so it is
>> impossible to return corrupted data (modulo bugs :-) ).
>>
> I am not a BTRFS dev , but this
On 2018-05-02 12:55, waxhead wrote:
Goffredo Baroncelli wrote:
Hi
On 05/02/2018 03:47 AM, Duncan wrote:
Gandalf Corvotempesta posted on Tue, 01 May 2018 21:57:59 +0000 as
excerpted:
Hi to all, I've found some patches from Andrea Mazzoleni that add
support for up to 6-parity raid.
Why these are
Goffredo Baroncelli wrote:
Hi
On 05/02/2018 03:47 AM, Duncan wrote:
Gandalf Corvotempesta posted on Tue, 01 May 2018 21:57:59 +0000 as
excerpted:
Hi to all, I've found some patches from Andrea Mazzoleni that add
support for up to 6-parity raid.
Why weren't these merged?
With modern disk sizes,
Hi
On 05/02/2018 03:47 AM, Duncan wrote:
> Gandalf Corvotempesta posted on Tue, 01 May 2018 21:57:59 +0000 as
> excerpted:
>
>> Hi to all, I've found some patches from Andrea Mazzoleni that add
>> support for up to 6-parity raid.
>> Why weren't these merged?
>> With modern disk sizes, having
On 5/2/18 9:15 AM, David Sterba wrote:
> On Wed, May 02, 2018 at 12:29:28PM +0200, David Sterba wrote:
>> On Thu, Apr 26, 2018 at 03:23:49PM -0400, je...@suse.com wrote:
>>> From: Jeff Mahoney
>>> +static void queue_rescan_worker(struct btrfs_fs_info *fs_info)
>>> +{
>>> +
On Wed, May 02, 2018 at 12:29:28PM +0200, David Sterba wrote:
> On Thu, Apr 26, 2018 at 03:23:49PM -0400, je...@suse.com wrote:
> > From: Jeff Mahoney
> > +static void queue_rescan_worker(struct btrfs_fs_info *fs_info)
> > +{
> > + mutex_lock(&fs_info->qgroup_rescan_lock);
> > +
On 2018-05-02 20:49, Nikolay Borisov wrote:
>
>
> On 2.05.2018 15:29, Qu Wenruo wrote:
>>
>>
>> On 2018-05-02 19:52, Nikolay Borisov wrote:
>>> Originally commit 2681e00f00fe ("btrfs-progs: check for matchingi
>>> free space in cache") added the account_super_bytes function to prevent
>>>
On 2.05.2018 15:29, Qu Wenruo wrote:
>
>
> On 2018-05-02 19:52, Nikolay Borisov wrote:
>> Originally commit 2681e00f00fe ("btrfs-progs: check for matchingi
>> free space in cache") added the account_super_bytes function to prevent
>> false negatives when running btrfs check. Turns out this
On 2018-05-02 19:52, Nikolay Borisov wrote:
> Originally commit 2681e00f00fe ("btrfs-progs: check for matchingi
> free space in cache") added the account_super_bytes function to prevent
> false negatives when running btrfs check. Turns out this function is
> really copied from exclude_super_stripes,
Now that the read side is extracted into its own function, do the same
to the write side. This leaves btrfs_get_blocks_direct_write with the
sole purpose of handling common locking required. Also flip the
condition in btrfs_get_blocks_direct_write so that the write case
comes first and we check
Currently this function handles both the READ and WRITE dio cases. This
is facilitated by a bunch of 'if' statements, a goto short-circuit
statement, and a very perverse aliasing of the "!created" (READ) case:
setting lockstart = lockend and checking for lockstart < lockend to
detect the write.
Originally commit 2681e00f00fe ("btrfs-progs: check for matchingi
free space in cache") added the account_super_bytes function to prevent
false negatives when running btrfs check. Turns out this function is
really copied from exclude_super_stripes, excluding the calls to
exclude_super_stripes. Later
On Thu, Apr 26, 2018 at 03:23:49PM -0400, je...@suse.com wrote:
> From: Jeff Mahoney
> +static void queue_rescan_worker(struct btrfs_fs_info *fs_info)
> +{
> + mutex_lock(&fs_info->qgroup_rescan_lock);
> + if (btrfs_fs_closing(fs_info)) {
> +
On 2.05.2018 08:28, Qu Wenruo wrote:
> Error message from qgroup_rescan_init() mostly looks like:
>
> --
> BTRFS info (device nvme0n1p1): qgroup_rescan_init failed with -115
> --
>
> Which is far from meaningful, and sometimes confusing; for the above,
> -EINPROGRESS is mostly (despite