On September 11, 2017 at 14:05, shally verma wrote:
I was going through BTRFS Deduplication page
(https://btrfs.wiki.kernel.org/index.php/Deduplication) and I read
"As such, xfs_io, is able to perform deduplication on a BTRFS file system," ..
Following this, I went on to the xfs_io link https://linux.
Function find_next_chunk() is used to find the next chunk start position.
It should only search the chunk tree, and its objectid is fixed to
BTRFS_FIRST_CHUNK_TREE_OBJECTID.
So refactor the parameter list to get rid of @root, which can be fetched
from fs_info->chunk_root, and @objectid, which is fixed to
BTRFS_FIRST_CHUNK_TREE_OBJECTID.
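As a rough illustration, the refactor shrinks the parameter list like this. The structs and the function body below are hypothetical minimal stand-ins, not the actual btrfs-progs code (the real function searches the chunk tree for the last chunk item):

```c
#include <stdint.h>

#define BTRFS_FIRST_CHUNK_TREE_OBJECTID 256ULL

/* Hypothetical minimal stand-ins for the real btrfs-progs types. */
struct btrfs_root { int unused; };
struct btrfs_fs_info { struct btrfs_root *chunk_root; };

/* Before: callers passed @root and @objectid even though both are
 * always the same values. */
static int find_next_chunk_old(struct btrfs_root *chunk_root,
                               uint64_t objectid, uint64_t *offset_ret)
{
    (void)chunk_root;
    (void)objectid;
    *offset_ret = 0;   /* placeholder for the real chunk-tree search */
    return 0;
}

/* After: both values are derived inside, so the parameter list shrinks
 * to just fs_info and the result pointer. */
static int find_next_chunk(struct btrfs_fs_info *fs_info, uint64_t *offset_ret)
{
    return find_next_chunk_old(fs_info->chunk_root,
                               BTRFS_FIRST_CHUNK_TREE_OBJECTID, offset_ret);
}
```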
Add extra limitations explained for the --rootdir option, including:
1) Size limitation
Now I decided to follow the "mkfs.ext4 -d" behavior, so the user is
responsible for making sure the block device/file is large enough.
2) Read permission
If the user can't read the content, mkfs will just fail.
So us
The --rootdir option will start a transaction to fill the fs; however, if
something goes wrong, from ENOSPC to lack of permission, we won't commit
the transaction, and the uncommitted transaction triggers a BUG_ON:
--
extent buffer leak: start 29392896 len 16384
extent_io.c:579: free_extent_buffer: BU
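The failure mode described above can be sketched in isolation. The helper names below are simplified mocks (the real code uses btrfs_start_transaction()/btrfs_commit_transaction(), and the abort path here is hypothetical); the point is that every exit path must end the transaction:

```c
#include <errno.h>

/* Simplified mock of a transaction lifecycle; not the btrfs-progs API. */
struct trans { int open; };

static int  start_transaction(struct trans *t)  { t->open = 1; return 0; }
static int  commit_transaction(struct trans *t) { t->open = 0; return 0; }
static void abort_transaction(struct trans *t)  { t->open = 0; }

/* Fill the fs from --rootdir. On any failure (ENOSPC, EACCES, ...) the
 * transaction must still be ended, or the leaked extent buffers trip
 * the BUG_ON shown in the backtrace. */
static int fill_from_rootdir(struct trans *t, int simulated_err)
{
    int ret;

    ret = start_transaction(t);
    if (ret < 0)
        return ret;

    ret = simulated_err;          /* stands in for copying the directory */
    if (ret < 0) {
        abort_transaction(t);     /* never leave it uncommitted */
        return ret;
    }
    return commit_transaction(t);
}
```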
Since the new --rootdir can allocate chunks, it will modify the chunk
allocation result.
This patch updates the allocation info before the verbose output to
reflect this.
Signed-off-by: Qu Wenruo
---
mkfs/main.c | 33 +
1 file changed, 33 insertions(+)
diff --git a/
mkfs.btrfs --rootdir uses its own custom chunk layout.
This makes it possible to limit the filesystem to a minimal size.
However, this custom chunk allocation has several problems.
The most obvious problem is that it allocates chunks from device
offset 0.
Both the kernel and normal mkfs will
When passing a directory larger than the block device using the --rootdir
parameter, we get the following backtrace:
--
extent-tree.c:2693: btrfs_reserve_extent: BUG_ON `ret` triggered, value -28
./mkfs.btrfs(+0x1a05d)[0x557939e6b05d]
./mkfs.btrfs(btrfs_reserve_extent+0xb5a)[0x557939e710c8]
./mkfs.btrfs
free_block_group_cache() calls clear_extent_bits() with a wrong end,
which is one byte larger than the correct range.
This causes the next adjacent cache state to be split.
And due to the split, the private pointer (which points to the block
group cache) will be reset to NULL.
This is very hard to detect
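The off-by-one can be modeled in isolation: clear_extent_bits() takes an inclusive end, so the end of the byte range [start, start + len) is start + len - 1, and passing start + len touches the first byte of the adjacent range. This is a minimal model of the arithmetic, not the actual extent_io code:

```c
#include <stdint.h>

typedef uint64_t u64;

/* Inclusive end of the byte range [start, start + len). Passing
 * "start + len" instead overlaps the first byte of the next range,
 * which is what caused the adjacent cache state to be split. */
static u64 range_end_inclusive(u64 start, u64 len)
{
    return start + len - 1;
}

/* Do the inclusive ranges [s1, e1] and [s2, e2] overlap? */
static int ranges_overlap(u64 s1, u64 e1, u64 s2, u64 e2)
{
    return s1 <= e2 && s2 <= e1;
}
```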
mkfs.btrfs --rootdir provides users a method to generate a btrfs
filesystem with pre-written content, without needing root privilege to
mount the fs.
However, the code is quite old and hasn't received much review or testing.
This leads to some strange behavior, from customized chunk allocation
(which uses the res
I was going through BTRFS Deduplication page
(https://btrfs.wiki.kernel.org/index.php/Deduplication) and I read
"As such, xfs_io, is able to perform deduplication on a BTRFS file system," ..
Following this, I went on to the xfs_io link https://linux.die.net/man/8/xfs_io
As I understand, these a
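For reference, xfs_io's dedupe command is a thin wrapper over the FIDEDUPERANGE ioctl, which any program can issue directly. Below is a minimal sketch (error handling trimmed); the filesystem must support dedupe (e.g. btrfs or XFS), otherwise the kernel reports an error in the per-destination status field:

```c
#include <errno.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

/* Ask the kernel to dedupe 'len' bytes at src_off in src_fd against
 * dst_off in dst_fd. Returns bytes deduped, 0 if the data differed,
 * or a negative errno on failure (e.g. on filesystems without dedupe
 * support). */
static long long dedupe_range(int src_fd, long long src_off,
                              int dst_fd, long long dst_off, long long len)
{
    struct file_dedupe_range *arg;
    long long ret;

    /* one destination record follows the fixed header */
    arg = calloc(1, sizeof(*arg) + sizeof(struct file_dedupe_range_info));
    if (!arg)
        return -ENOMEM;
    arg->src_offset = src_off;
    arg->src_length = len;
    arg->dest_count = 1;
    arg->info[0].dest_fd = dst_fd;
    arg->info[0].dest_offset = dst_off;

    if (ioctl(src_fd, FIDEDUPERANGE, arg) < 0)
        ret = -errno;
    else if (arg->info[0].status < 0)
        ret = arg->info[0].status;        /* negative errno from the kernel */
    else
        ret = (long long)arg->info[0].bytes_deduped;
    free(arg);
    return ret;
}
```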
...and can it be related to the Samsung 840 SSDs not supporting NCQ
Trim? (Although I can't tell which device this trace is from -- it
could be a mechanical Western Digital.)
On Sun, Sep 10, 2017 at 10:16 PM, Rich Rauenzahn wrote:
> Is this something to be concerned about?
>
> I'm running the l
Marc reported that "btrfs check --repair" runs much faster than "btrfs
check", which is quite weird.
This patch adds the time elapsed for each major tree it checks, for
both original mode and lowmem mode, so we can get a clue about what's
going wrong.
Reported-by: Marc MERLIN
Signed-off-by: Qu Wenru
On September 10, 2017 at 22:34, Martin Raiber wrote:
Hi,
On 10.09.2017 08:45 Qu Wenruo wrote:
On September 10, 2017 at 14:41, Qu Wenruo wrote:
On September 10, 2017 at 07:50, Rohan Kadekodi wrote:
Hello,
I was trying to understand how file renames are handled in Btrfs. I
read the code documentation, but had a probl
This patch updates btrfs-completion:
- add "filesystem du" and "rescure zero-log"
- restrict _btrfs_mnts to show btrfs type only
- add more completion in last case statements
(This file contains both spaces/tabs and may need cleanup.)
Signed-off-by: Tomohiro Misono
---
btrfs-completion | 43
Is this something to be concerned about?
I'm running the latest mainline kernel on CentOS 7.
[ 1338.882288] ------------[ cut here ]------------
[ 1338.883058] WARNING: CPU: 2 PID: 790 at fs/btrfs/ctree.h:1559
btrfs_update_device+0x1c5/0x1d0 [btrfs]
[ 1338.883809] Modules linked in: xt_nat veth i
On 10.09.2017 23:17, Dmitrii Tcvetkov writes:
>>> Drive1 Drive2 Drive3
>>>   X      X
>>>   X             X
>>>          X      X
>>>
>>> Where X is a chunk of raid1 block group.
>>
>> But this table clearly shows that adding third drive increases free
>> space by 50%.
On September 10, 2017 at 22:32, Rohan Kadekodi wrote:
Thank you for the prompt and elaborate answers! However, I think I was
unclear in my questions, and I apologize for the confusion.
What I meant was that for a file rename, when I check the blktrace
output, there are 2 writes of 256KB each starting fr
On Sun, Sep 10, 2017 at 01:16:26PM +, Josef Bacik wrote:
> Great, if the free space cache is fucked again after the next go
> around then I need to expand the verifier to watch entries being added
> to the cache as well. Thanks,
Well, I copied about 1TB of data, and nothing happened.
So it se
FLJ posted on Sun, 10 Sep 2017 15:45:42 +0200 as excerpted:
> I have a BTRFS RAID1 volume running for the past year. I avoided all
> pitfalls known to me that would mess up this volume. I never
> experimented with quotas, no-COW, snapshots, defrag, nothing really.
> The volume is a RAID1 from day
Am Sun, 10 Sep 2017 20:15:52 +0200
schrieb Ferenc-Levente Juhos :
> >Problem is that each raid1 block group contains two chunks on two
> >separate devices, it can't utilize fully three devices no matter
> >what. If that doesn't suit you then you need to add 4th disk. After
> >that FS will be able
> > Drive1 Drive2 Drive3
> >   X      X
> >   X             X
> >          X      X
> >
> > Where X is a chunk of raid1 block group.
>
> But this table clearly shows that adding third drive increases free
> space by 50%. You need to reallocate data to actually mak
On 10.09.2017 19:11, Dmitrii Tcvetkov writes:
>> Actually based on http://carfax.org.uk/btrfs-usage/index.html I
>> would've expected 6 TB of usable space. Here I get 6.4 which is odd,
>> but that only 1.5 TB is available is even stranger.
>>
>> Could anyone explain what I did wrong or why my expectati
On 10.09.2017 18:47, Kai Krakow writes:
> Am Sun, 10 Sep 2017 15:45:42 +0200
> schrieb FLJ :
>
>> Hello all,
>>
>> I have a BTRFS RAID1 volume running for the past year. I avoided all
>> pitfalls known to me that would mess up this volume. I never
>> experimented with quotas, no-COW, snapshots, defrag
>Problem is that each raid1 block group contains two chunks on two
>separate devices; it can't fully utilize three devices no matter what.
>If that doesn't suit you, then you need to add a 4th disk. After
>that the FS will be able to use all unallocated space on all disks in
>the raid1 profile. But even then
> @Kai and Dmitrii
> thank you for your explanations if I understand you correctly, you're
> saying that btrfs makes no attempt to "optimally" use the physical
> devices it has in the FS, once a new RAID1 block group needs to be
> allocated it will semi-randomly pick two devices with enough space a
@Kai and Dmitrii
thank you for your explanations. If I understand you correctly, you're
saying that btrfs makes no attempt to "optimally" use the physical
devices it has in the FS; once a new RAID1 block group needs to be
allocated, it will semi-randomly pick two devices with enough space and
allocat
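The allocation policy being discussed (each new raid1 block group takes one chunk on each of the two devices with the most unallocated space) can be modeled with a small greedy loop. This is a simplified model for reasoning about usable space, not btrfs source code:

```c
/* Greedy model: every raid1 block group allocates one chunk on each of
 * the two devices with the most unallocated space left. */
static unsigned long long raid1_usable(unsigned long long *free_space,
                                       int ndevs,
                                       unsigned long long chunk)
{
    unsigned long long usable = 0;

    for (;;) {
        int a = -1, b = -1;

        /* pick the two devices with the most free space (>= one chunk) */
        for (int i = 0; i < ndevs; i++) {
            if (free_space[i] < chunk)
                continue;
            if (a < 0 || free_space[i] > free_space[a]) {
                b = a;
                a = i;
            } else if (b < 0 || free_space[i] > free_space[b]) {
                b = i;
            }
        }
        if (b < 0)
            break;  /* fewer than two devices can hold another chunk */

        free_space[a] -= chunk;
        free_space[b] -= chunk;
        usable += chunk;    /* a mirrored pair stores one chunk of data */
    }
    return usable;
}
```

With equal devices this yields half the total space; with skewed devices the smaller disks become the limit, which matches the "adding a third drive helps only after rebalancing" discussion above.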
>Actually based on http://carfax.org.uk/btrfs-usage/index.html I
>would've expected 6 TB of usable space. Here I get 6.4 which is odd,
>but that only 1.5 TB is available is even stranger.
>
>Could anyone explain what I did wrong or why my expectations are wrong?
>
>Thank you in advance
I'd say df
Am Sun, 10 Sep 2017 15:45:42 +0200
schrieb FLJ :
> Hello all,
>
> I have a BTRFS RAID1 volume running for the past year. I avoided all
> pitfalls known to me that would mess up this volume. I never
> experimented with quotas, no-COW, snapshots, defrag, nothing really.
> The volume is a RAID1 from
On Sat, Sep 09, 2017 at 10:43:16PM +0300, Andrei Borzenkov wrote:
> On 09.09.2017 16:44, Ulli Horlacher writes:
> >
> > Your tool does not create .snapshot subdirectories in EVERY directory like
>
> Neither does NetApp. Those "directories" are magic handles that do not
> really exist.
Correct, than
Hi,
On 10.09.2017 08:45 Qu Wenruo wrote:
>
>
> On September 10, 2017 at 14:41, Qu Wenruo wrote:
>>
>>
>> On September 10, 2017 at 07:50, Rohan Kadekodi wrote:
>>> Hello,
>>>
>>> I was trying to understand how file renames are handled in Btrfs. I
>>> read the code documentation, but had a problem understanding a few
>
Thank you for the prompt and elaborate answers! However, I think I was
unclear in my questions, and I apologize for the confusion.
What I meant was that for a file rename, when I check the blktrace
output, there are 2 writes of 256KB each starting from byte number:
13373440
When I check btrfs-deb
> As I am writing some documentation abount creating snapshots:
> Is there a generic name for both volume and subvolume root?
Yes, it is from the UNIX side 'root directory' and from the
Btrfs side 'subvolume'. Like some other things in Btrfs, its
terminology is often inconsistent, but "volume" *usual
Hello all,
I have a BTRFS RAID1 volume running for the past year. I avoided all
pitfalls known to me that would mess up this volume. I never
experimented with quotas, no-COW, snapshots, defrag, nothing really.
The volume has been a RAID1 from day 1 and has worked reliably until now.
Until yesterday it
On Sun, Sep 10, 2017 at 02:01:58PM +0800, Qu Wenruo wrote:
>
>
> > On September 10, 2017 at 01:44, Marc MERLIN wrote:
> > So, should I assume that btrfs progs git has some issue since there is
> > no plausible way that a check --repair should be faster than a regular
> > check?
>
> Yes, the assumption that
Great, if the free space cache is fucked again after the next go around then I
need to expand the verifier to watch entries being added to the cache as well.
Thanks,
Josef
Sent from my iPhone
> On Sep 10, 2017, at 9:14 AM, Marc MERLIN wrote:
>
>> On Sun, Sep 10, 2017 at 03:12:16AM +, Jo
On Sun, Sep 10, 2017 at 03:12:16AM +, Josef Bacik wrote:
> Ok mount -o clear_cache, umount and run fsck again just to make sure. Then
> if it comes out clean mount with ref_verify again and wait for it to blow up
> again. Thanks,
Ok, just did the 2nd fsck, came back clean after mount -o c
On September 10, 2017 at 19:19, Christophe JAILLET wrote:
If 'btrfs_alloc_path()' fails, we must free the resources already
allocated, as done in the other error handling paths in this function.
Signed-off-by: Christophe JAILLET
Reviewed-by: Qu Wenruo
BTW, I also checked all btrfs_alloc_path() in s
If 'btrfs_alloc_path()' fails, we must free the resources already
allocated, as done in the other error handling paths in this function.
Signed-off-by: Christophe JAILLET
---
fs/btrfs/tests/free-space-tree-tests.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
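The cleanup pattern the patch describes can be sketched as follows. The types and helper bodies below are mocks for illustration (the real btrfs_alloc_path()/btrfs_free_path() live in the kernel tree), and the test knob is purely hypothetical:

```c
#include <errno.h>
#include <stdlib.h>

static int fail_path_alloc;   /* test knob: force the allocation to fail */

struct btrfs_path { int unused; };

/* Mock stand-ins for the kernel helpers. */
static struct btrfs_path *btrfs_alloc_path(void)
{
    return fail_path_alloc ? NULL : calloc(1, sizeof(struct btrfs_path));
}

static void btrfs_free_path(struct btrfs_path *p) { free(p); }

/* Pattern from the patch: when a later allocation fails, release what
 * was already allocated instead of leaking it. */
static int run_one_test(int *freed_cache)
{
    int ret = 0;
    void *cache = malloc(64);          /* earlier allocation */
    struct btrfs_path *path;

    if (!cache)
        return -ENOMEM;

    path = btrfs_alloc_path();
    if (!path) {
        ret = -ENOMEM;
        goto out_free_cache;           /* do not leak 'cache' */
    }

    /* ... use path ... */
    btrfs_free_path(path);
out_free_cache:
    free(cache);
    *freed_cache = 1;
    return ret;
}
```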
diff --git a/fs/btrfs/tes
On 10 September 2017 at 08:33, Marat Khalili wrote:
> It doesn't need replaced disk to be readable, right?
Only enough to be mountable, which it already is, so your read errors
on /dev/sdb aren't a problem.
> Then what prevents same procedure to work without a spare bay?
It is basically the same
Perhaps NetApp is using a VFS overlay. There is really only one snapshot,
but it is shown in the overlay on every folder. Much the same as Samba
Shadow Copies.
From: Ulli Horlacher -- Sent: 2017-09-09 21:52
> On Sat 2017-09-09 (22:43), Andrei Borzenkov wrote:
>
>> > Your tool