On Wednesday, 2021-02-24 16:51:03, Chengguang Xu wrote:
> On Wednesday, 2021-02-24 15:52:17, Su Yue wrote:
> >
> > Cc to the author and linux-xfs, since it's xfsprogs related.
> >
> > On Tue 23 Feb 2021 at 21:40, Chengguang Xu
> > wrote:
> >
> > > It seems the expected result of
On Wednesday, 2021-02-24 17:22:35, Su Yue wrote:
>
> On Wed 24 Feb 2021 at 16:51, Chengguang Xu
> wrote:
>
> > On Wednesday, 2021-02-24 15:52:17, Su Yue wrote:
> >
> > >
> > > Cc to the author and linux-xfs, since it's xfsprogs related.
> > >
> > > On Tue 23 Feb 2021 at 21:40, C
Hi there,
As a newbie in Btrfs land I installed a RAID1 configuration and played with it.
After hot-removing a drive of the RAID I have two questions that I cannot find
an answer to with my Google foo.
So, to not die stupid, I am posting a little email here to see if there are
solutions for this :)
From: Nikolay Borisov
[ Upstream commit 9db4dc241e87fccd8301357d5ef908f40b50f2e3 ]
It's currently u64 which gets instantly translated either to LONG_MAX
(if U64_MAX is passed) or cast to an unsigned long (which is, in fact,
wrong because writeback_control::nr_to_write is a signed long type).
Ju
From: Josef Bacik
[ Upstream commit e19eb11f4f3d3b0463cd897016064a79cb6d8c6d ]
I've been running a stress test that runs 20 workers in their own
subvolume, which are running an fsstress instance with 4 threads per
worker, which is 80 total fsstress threads. In addition to this I'm
running balan
From: Josef Bacik
[ Upstream commit 4f4317c13a40194940acf4a71670179c4faca2b5 ]
While doing error injection I would sometimes get a corrupt file system.
This is because I was injecting errors at btrfs_search_slot, but would
only do it one time per stack. This uncovered a problem in
commit_fs_roo
On Wed, Feb 24, 2021 at 05:37:20PM +0800, Chengguang Xu wrote:
> On Wednesday, 2021-02-24 17:22:35, Su Yue wrote:
> >
> > On Wed 24 Feb 2021 at 16:51, Chengguang Xu
> > wrote:
> >
> > > On Wednesday, 2021-02-24 15:52:17, Su Yue wrote:
> > >
> > > >
> > > > Cc to the author and linux
On Wednesday, 2021-02-24 21:31:46, Eryu Guan wrote:
> On Wed, Feb 24, 2021 at 05:37:20PM +0800, Chengguang Xu wrote:
> > On Wednesday, 2021-02-24 17:22:35, Su Yue wrote:
> > >
> > > On Wed 24 Feb 2021 at 16:51, Chengguang Xu
> > > wrote:
> > >
> > > > On Wednesday, 2021-02-24 15:52:1
On Tue, Feb 23, 2021 at 10:05 AM Josef Bacik wrote:
>
> On 2/22/21 11:03 PM, Neal Gompa wrote:
> > On Mon, Feb 22, 2021 at 2:34 PM Josef Bacik wrote:
> >>
> >> On 2/21/21 1:27 PM, Neal Gompa wrote:
> >>> On Wed, Feb 17, 2021 at 11:44 AM Josef Bacik wrote:
>
> On 2/17/21 11:29 AM, Neal
On 2/24/21 9:23 AM, Neal Gompa wrote:
On Tue, Feb 23, 2021 at 10:05 AM Josef Bacik wrote:
On 2/22/21 11:03 PM, Neal Gompa wrote:
On Mon, Feb 22, 2021 at 2:34 PM Josef Bacik wrote:
On 2/21/21 1:27 PM, Neal Gompa wrote:
On Wed, Feb 17, 2021 at 11:44 AM Josef Bacik wrote:
On 2/17/21 11:29
Last week I was curious to just see how btrfs is faring with RAID5 in
xfstests, so I set it up for a quick run with devices configured as:
TEST_DEV=/dev/sdb1 # <--- this was a 3-disk "-d raid5" filesystem
SCRATCH_DEV_POOL="/dev/sdb2 /dev/sdb3 /dev/sdb4 /dev/sdb5 /dev/sdb6"
and fired off ./check -
On 2021-02-24 01:20, Anand Jain wrote:
On 24/02/2021 01:35, Johannes Thumshirn wrote:
On 23/02/2021 18:20, Steven Davies wrote:
On 2021-02-23 14:30, David Sterba wrote:
On Tue, Feb 23, 2021 at 09:43:04AM +, Johannes Thumshirn wrote:
On 23/02/2021 10:13, Johannes Thumshirn wrote:
On 22/02
On Wed, Feb 24, 2021 at 07:50:13AM -0500, Sasha Levin wrote:
> From: Josef Bacik
>
> [ Upstream commit e19eb11f4f3d3b0463cd897016064a79cb6d8c6d ]
>
> I've been running a stress test that runs 20 workers in their own
> subvolume, which are running an fsstress instance with 4 threads per
> worker,
On Wed, Feb 24, 2021 at 07:50:12AM -0500, Sasha Levin wrote:
> From: Nikolay Borisov
>
> [ Upstream commit 9db4dc241e87fccd8301357d5ef908f40b50f2e3 ]
>
> It's currently u64 which gets instantly translated either to LONG_MAX
> (if U64_MAX is passed) or cast to an unsigned long (which is in fact,
On Mon, Feb 22, 2021 at 07:17:49AM -0600, Goldwyn Rodrigues wrote:
> force_nocow can be calculated by btrfs_inode and does not need to be
> passed as an argument.
>
> This simplifies run_delalloc_nocow() call from btrfs_run_delalloc_range()
> where should_nocow() checks for BTRFS_INODE_NODATASUM a
On Mon, Feb 15, 2021 at 08:38:20PM +0530, Maheep Kumar Kathuria wrote:
> Fixed a coding style issue in thresh_exec_hook()
>
> Signed-off-by: Maheep Kumar Kathuria
> ---
> fs/btrfs/async-thread.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/fs/btrfs/async-thread.c b
On Sat, Feb 20, 2021 at 10:06:33AM +0800, Qu Wenruo wrote:
> Due to the pagecache limit of 32bit systems, btrfs can't access metadata
> at or beyond 16T boundary correctly.
>
> And unlike other fses, btrfs uses internally mapped u64 address space for
> all of its metadata, this is more tricky than
On 19:56 24/02, David Sterba wrote:
> On Mon, Feb 22, 2021 at 07:17:49AM -0600, Goldwyn Rodrigues wrote:
> > force_nocow can be calculated by btrfs_inode and does not need to be
> > passed as an argument.
> >
> > This simplifies run_delalloc_nocow() call from btrfs_run_delalloc_range()
> > where s
On 2/24/21 10:12 AM, Eric Sandeen wrote:
> Last week I was curious to just see how btrfs is faring with RAID5 in
> xfstests, so I set it up for a quick run with devices configured as:
Whoops this was supposed to cc: fstests, not fsdevel, sorry.
-Eric
> TEST_DEV=/dev/sdb1 # <--- this was a 3-disk
The compression options in Btrfs are great, and help save a ton of
space on disk. Zstandard works extremely well for this, and is fairly
fast. However, it can heavily reduce the speed of quick disks, does
not work well on lower-end systems, and does not scale well across
multiple cores. Zlib is eve
On 2021/2/25 3:18 AM, David Sterba wrote:
On Sat, Feb 20, 2021 at 10:06:33AM +0800, Qu Wenruo wrote:
Due to the pagecache limit of 32bit systems, btrfs can't access metadata
at or beyond 16T boundary correctly.
And unlike other fses, btrfs uses internally mapped u64 address space for
all of i
On 25/02/2021 05:39, Eric Sandeen wrote:
On 2/24/21 10:12 AM, Eric Sandeen wrote:
Last week I was curious to just see how btrfs is faring with RAID5 in
xfstests, so I set it up for a quick run with devices configured as:
Whoops this was supposed to cc: fstests, not fsdevel, sorry.
-Eric
TES
Due to the pagecache limit of 32-bit systems, btrfs can't access metadata
at or beyond (ULONG_MAX + 1) << PAGE_SHIFT.
This is 16T for 4K page size and 256T for 64K page size.
And unlike other fses, btrfs uses an internally mapped u64 address space for
all of its metadata; this is more tricky than ot
On 12/02/2021 22:36, David Sterba wrote:
On Wed, Feb 10, 2021 at 09:25:18PM -0800, Anand Jain wrote:
Move the static function scrub_checksum_tree_block() before its use in
the scrub.c, and drop its declaration.
No functional changes.
We've rejected patches that move static function within
On 2/24/21 7:16 PM, Anand Jain wrote:
> On 25/02/2021 05:39, Eric Sandeen wrote:
>> On 2/24/21 10:12 AM, Eric Sandeen wrote:
>>> Last week I was curious to just see how btrfs is faring with RAID5 in
>>> xfstests, so I set it up for a quick run with devices configured as:
>>
>> Whoops this was suppo
On 2021/2/25 9:46 AM, Eric Sandeen wrote:
On 2/24/21 7:16 PM, Anand Jain wrote:
On 25/02/2021 05:39, Eric Sandeen wrote:
On 2/24/21 10:12 AM, Eric Sandeen wrote:
Last week I was curious to just see how btrfs is faring with RAID5 in
xfstests, so I set it up for a quick run with devices config
On 2/24/21 7:56 PM, Qu Wenruo wrote:
>
>
> On 2021/2/25 9:46 AM, Eric Sandeen wrote:
>> On 2/24/21 7:16 PM, Anand Jain wrote:
>>> On 25/02/2021 05:39, Eric Sandeen wrote:
On 2/24/21 10:12 AM, Eric Sandeen wrote:
> Last week I was curious to just see how btrfs is faring with RAID5 in
>
On 2021/2/25 10:45 AM, Eric Sandeen wrote:
On 2/24/21 7:56 PM, Qu Wenruo wrote:
On 2021/2/25 9:46 AM, Eric Sandeen wrote:
On 2/24/21 7:16 PM, Anand Jain wrote:
On 25/02/2021 05:39, Eric Sandeen wrote:
On 2/24/21 10:12 AM, Eric Sandeen wrote:
Last week I was curious to just see how btrfs i
On 2/24/21 9:13 PM, Qu Wenruo wrote:
> Now this makes way more sense,
Sorry for the earlier mistake.
> as your previous comment on
> _btrfs_forget_or_module_reload is completely correct.
>
> _btrfs_forget_or_module_reload will really forget all devices, while
> what we really want is just exclu
On 2021/2/25 11:15 AM, Eric Sandeen wrote:
On 2/24/21 9:13 PM, Qu Wenruo wrote:
Now this makes way more sense,
Sorry for the earlier mistake.
as your previous comment on
_btrfs_forget_or_module_reload is completely correct.
_btrfs_forget_or_module_reload will really forget all devices, w
On Wed, Feb 24, 2021 at 10:44 AM Josef Bacik wrote:
>
> On 2/24/21 9:23 AM, Neal Gompa wrote:
> > On Tue, Feb 23, 2021 at 10:05 AM Josef Bacik wrote:
> >>
> >> On 2/22/21 11:03 PM, Neal Gompa wrote:
> >>> On Mon, Feb 22, 2021 at 2:34 PM Josef Bacik wrote:
>
> On 2/21/21 1:27 PM, Neal G
While playing with a seed device (misc/next and v5.11), lockdep
complains with the following:
To reproduce:
dev1=/dev/sdb1
dev2=/dev/sdb2
umount /mnt
mkfs.btrfs -f $dev1
btrfstune -S 1 $dev1
mount $dev1 /mnt
btrfs device add $dev2 /mnt/ -f
umount /mnt
mount $dev2 /mnt
umount /mnt
Warning:
On Tue, Feb 23, 2021 at 8:49 AM Sebastian Roller
wrote:
>
> Hello all.
> Sorry for asking here directly, but I'm in a desperate situation and
> out of options.
> I have a 72 TB btrfs filesystem which functions as a backup drive.
> After a recent controller hardware failure while the backup was
> r
On Wed, Feb 24, 2021 at 10:40 PM Chris Murphy wrote:
>
> I think your best chance is to start out trying to restore from a
> recent snapshot. As long as the failed controller wasn't writing
> totally spurious data in random locations, that snapshot should be
> intact.
i.e. the strategy for this is
There are some btrfs test cases that use
_btrfs_forget_or_module_reload() to unregister btrfs devices.
However, _btrfs_forget_or_module_reload() unregisters all devices,
meaning that if TEST_DEV is part of a multi-device btrfs, TEST_DEV
will no longer be mountable after those test cases.
This
On Thu, Feb 18, 2021 at 08:20:18AM -0800, Darrick J. Wong wrote:
> > I think a nested call like this is necessary. That's why I use the open
> > code way.
>
> This might be a good place to implement an iomap_apply2() loop that
> actually /does/ walk all the extents of file1 and file2. There's no
Hi,
xfstest btrfs/154 fails on kernel 5.4.100
frequency: always
kernel version: 5.4.100; other kernel versions not yet tested.
xfstest: https://github.com/kdave/xfstests.git
btrfs-progs: 5.10.1
but mkfs.btrfs enables no-holes and free-space-tree by default.
# ./check btrfs/154
FSTYP