On Tue, Sep 05, 2017 at 09:21:55AM +0800, Qu Wenruo wrote:
>
>
> On 2017-09-05 09:05, Marc MERLIN wrote:
> >Ok, I don't want to sound like I'm complaining :) but I updated
> >btrfs-progs to top of tree in git, installed it, and ran it on an 8TiB
> >filesystem that used to take 12H or so to
Duncan posted on Sat, 02 Sep 2017 04:03:06 + as excerpted:
> Austin S. Hemmelgarn posted on Fri, 01 Sep 2017 10:07:47 -0400 as
> excerpted:
>
>> On 2017-09-01 09:54, Qu Wenruo wrote:
>>>
> >>> On 2017-09-01 20:47, Austin S. Hemmelgarn wrote:
>
On 2017-09-01 08:19, Qu Wenruo wrote:
Ok, not quite hours, but check takes 88mn, check --repair takes 11mn
gargamel:/var/local/src/btrfs-progs# time btrfs check /dev/mapper/dshelf1
Checking filesystem on /dev/mapper/dshelf1
UUID: 36f5079e-ca6c-4855-8639-ccb82695c18d
checking extents
checking free space cache
cache and super
This new test checks inspect-internal rootid:
- handles a path to a subvolume/directory/file as an argument
- gets a different id for each subvolume
- gets the expected id for each file/directory
(i.e. the same as the containing subvolume)
Signed-off-by: Tomohiro Misono
On 2017-09-05 09:05, Marc MERLIN wrote:
Ok, I don't want to sound like I'm complaining :) but I updated
btrfs-progs to top of tree in git, installed it, and ran it on an 8TiB
filesystem that used to take 12H or so to check.
How much space is allocated for that 8T fs?
If metadata is not that
Hello,
while experiencing slow btrfs volumes I switched to kernel v4.13 and to
space_cache=v2.
But I'm still seeing slow performance and single kworker processes
using 100% CPU.
Tracing the kworker process shows me:
# sed 's/.*: //' /trace | sort | uniq -c | sort -n
21595
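The counting idiom used above can be reproduced on synthetic input (the trace file contents here are a made-up stand-in): the greedy `sed` strips everything up to the last ": " on each line, and the rest is counted per unique token:

```shell
# Hypothetical stand-in for /trace: three lines ending in "foo", one in "bar".
printf '%s\n' 'kworker/0:1: foo' 'kworker/0:1: bar' 'x: foo' 'y: foo' \
    | sed 's/.*: //' | sort | uniq -c | sort -n
# The most frequent trailing token sorts last, prefixed with its count.
```

With real trace output, the last line of this pipeline names the function the kworker spends most of its time in.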
Ok, I don't want to sound like I'm complaining :) but I updated
btrfs-progs to top of tree in git, installed it, and ran it on an 8TiB
filesystem that used to take 12H or so to check.
It finished in maybe 10mn, just 10mn! :)
gargamel:/var/local/src/btrfs-progs# btrfs check --repair
Add a test case which checks if the -r|--rootdir mkfs option can handle
symlink/char/block/fifo files.
Signed-off-by: Qu Wenruo
---
changelog:
v2:
Use mktemp instead of $$
Use default test device size
Remove unnecessary global prereq for basic tools like mkdir
Put
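The mktemp change listed in v2 above is the usual fix for predictable `$$`-based temp names; a minimal sketch (the `mkfs-rootdir` template name is illustrative, not taken from the patch):

```shell
# Old pattern: PID-based name, predictable and racy.
#   tmpdir=/tmp/test.$$
# New pattern: mktemp creates the directory atomically with a random suffix.
tmpdir=$(mktemp -d --tmpdir mkfs-rootdir.XXXXXX)
echo "$tmpdir"
rm -rf "$tmpdir"
```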
The first patch causes test-convert to fail. This is because
generate_dataset() creates a name containing trailing spaces for the
"slow_symlink" type, which causes a getfacl error in convert_test_perm().
(This is not noticed since the original run_check_stdout() throws away the
error.)
Fix this by using space for
run_check_stdout() uses "... | tee ... || _fail". However, since tee
won't fail, _fail() is not called even if the first command fails.
Fix this by checking PIPESTATUS at the end.
Signed-off-by: Tomohiro Misono
---
tests/common | 7 +--
1 file changed, 5
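The failure mode and the fix can be demonstrated outside the test harness (a sketch, not the actual tests/common code):

```shell
#!/bin/bash
# A pipeline's exit status is the LAST command's, so tee masks failures:
false | tee /dev/null
echo "pipeline exit: $?"            # prints 0 -- the failure is hidden

# Fix: check the first command's status via bash's PIPESTATUS array,
# which records the exit status of every stage of the last pipeline.
false | tee /dev/null
if [ "${PIPESTATUS[0]}" -ne 0 ]; then
    echo "first command failed"
fi
```

Note PIPESTATUS must be read immediately after the pipeline, before any other command overwrites it.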
On Fri, Aug 25, 2017 at 03:23:36PM +0800, Gu Jinxiang wrote:
> make_btrfs is too long to understand, make creation of root tree
> in a function.
Good cleanup, but the changelog could be improved. There are some
changes not obvious from the diff, like that you now iterate some of
the roots
On Mon, Sep 04, 2017 at 12:43:20PM +0900, Qu Wenruo wrote:
> Add a test case which checks if the -r|--rootdir mkfs option can handle
> symlink/char/block/fifo files.
>
> Signed-off-by: Qu Wenruo
Patch 1 applied, please fix up the test as commented below.
> ---
>
On Wed, Aug 30, 2017 at 02:53:22PM -0700, Nick Terrell wrote:
> Adds zstd support to the btrfs program, and a dependency on libzstd >=
> 1.0.0.
I'm afraid we'll have to make the build optional for now, as the distros
may not provide it and I'd like to give at least some heads-up first. My
idea
On Mon, Sep 04, 2017 at 03:41:05PM +0900, Qu Wenruo wrote:
> mkfs.btrfs --rootdir provides users a method to generate a btrfs filesystem
> with pre-written content without the need of root privilege.
>
> However the code is quite old and hasn't received much review or testing.
> This leads to some strange
On Thu, Aug 31, 2017 at 01:00:24PM +0200, Patrik Lundquist wrote:
> Print 'Device slack: 0.00B'
> instead of 'Device slack: 16.00EiB'
>
> Signed-off-by: Patrik Lundquist
Applied, thanks. I've added a test.
On Fri, Sep 01, 2017 at 04:14:26PM -0600, Liu Bo wrote:
> Block layer has a limit on plug, ie. BLK_MAX_REQUEST_COUNT == 16, so
> we don't gain benefits by batching 64 bios here.
So this effectively does not change anything on the btrfs side, but we
can remove code that relies on some internal
>>> [ ... ] Currently without any ssds i get the best speed with:
>>> - 4x HW Raid 5 with 1GB controller memory of 4TB 3,5" devices
>>> and using btrfs as raid 0 for data and metadata on top of
>>> those 4 raid 5. [ ... ] the write speed is not as good as i
>>> would like - especially for random
On 2017-09-04 15:28, Timofey Titovets wrote:
> 2017-09-04 15:57 GMT+03:00 Stefan Priebe - Profihost AG
> :
>> On 2017-09-04 12:53, Henk Slager wrote:
>>> On Sun, Sep 3, 2017 at 8:32 PM, Stefan Priebe - Profihost AG
>>> wrote:
Hello,
On Mon, Sep 04, 2017 at 07:07:25PM +0300, Timofey Titovets wrote:
> 2017-09-04 18:11 GMT+03:00 Adam Borowski :
> > Here's a utility to measure used compression type + ratio on a set of files
> > or directories: https://github.com/kilobyte/compsize
> >
> > It should be of
Hello list,
good time of the day,
More than once I have seen it mentioned on this list that the autodefrag
option solves problems with no apparent drawbacks, yet it's not the default.
Would you recommend just switching it on indiscriminately on all
installations?
I'm currently on kernel 4.4, can switch to
Qu Wenruo posted on Mon, 04 Sep 2017 15:41:10 +0900 as excerpted:
> +NOTE: Result btrfs will be shrink to it minimal size, exceeding space will
> not
> +be accessible unless resized.
s/exceeding/additional/
(Exceeding could be used if reworded, but the results I came up with were
much less
On Mon, Sep 04, 2017 at 12:31:54PM +0300, Marat Khalili wrote:
> Hello list,
> good time of the day,
>
> More than once I see mentioned in this list that autodefrag option
> solves problems with no apparent drawbacks, but it's not the
> default. Can you recommend to just switch it on
On Mon, Sep 4, 2017 at 12:34 PM, Duncan <1i5t5.dun...@cox.net> wrote:
> * Autodefrag works very well when these internal-rewrite-pattern files
> are relatively small, say a quarter GiB or less, but, again with near-
> capacity throughput, not necessarily so well with larger databases or VM
>
On Sun, Sep 3, 2017 at 8:32 PM, Stefan Priebe - Profihost AG
wrote:
> Hello,
>
> i'm trying to speed up big btrfs volumes.
>
> Some facts:
> - Kernel will be 4.13-rc7
> - needed volume size is 60TB
>
> Currently without any ssds i get the best speed with:
> - 4x HW Raid 5
>> [ ... ] Currently the write speed is not as good as i would
>> like - especially for random 8k-16k I/O. [ ... ]
> [ ... ] So this 60TB is then 20 4TB disks or so and the 4x 1GB
> cache is simply not very helpful I think. The working set
> doesn't fit in it I guess. If there is mostly single or
On Mon, Sep 4, 2017 at 11:31 AM, Marat Khalili wrote:
> Hello list,
> good time of the day,
>
> More than once I see mentioned in this list that autodefrag option solves
> problems with no apparent drawbacks, but it's not the default. Can you
> recommend to just switch it on
On Mon, Sep 4, 2017 at 7:19 AM, Russell Coker
wrote:
> I have a system with less than 50% disk space used. It just started rejecting
> writes due to lack of disk space. I ran "btrfs balance" and then it started
> working correctly again. It seems that a btrfs
> [ ... ] I ran "btrfs balance" and then it started working
> correctly again. It seems that a btrfs filesystem if left
> alone will eventually get fragmented enough that it rejects
> writes [ ... ]
Free space will get fragmented, because Btrfs has a 2-level
allocator scheme (chunks within
mkfs.btrfs --rootdir provides users a method to generate a btrfs filesystem
with pre-written content without the need of root privilege.
However the code is quite old and hasn't received much review or testing.
This leads to some strange behavior, from customized chunk allocation
(which uses the reserved 0~1M
Follow the original rootdir behavior and shrink the device size to the
minimum.
The shrink itself is very simple: since dev extents are allocated on
demand, we just need to shrink the device size to the last device extent's
end position.
Signed-off-by: Qu Wenruo
---
mkfs/main.c | 107
mkfs.btrfs --rootdir uses its own custom chunk layout.
This provides the possibility to limit the filesystem to a minimal size.
However, this custom chunk allocation has several problems.
The most obvious problem is that it allocates chunks from device offset
0.
Both kernel and normal mkfs will
free_block_group_cache() calls clear_extent_bits() with a wrong end, which
is one byte larger than the correct range.
This causes the next adjacent cache state to be split.
And due to the split, the private pointer (which points to the block group
cache) is reset to NULL.
This is very hard to detect
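The off-by-one comes from extent-state ranges being inclusive on both ends; a shell-arithmetic sketch of the convention (illustrative values, not btrfs-progs code):

```shell
# Extent states cover [start, end] INCLUSIVE, so a block group of
# length len starting at start ends at start + len - 1, not start + len.
start=1048576          # e.g. block group offset (1M)
len=1073741824         # e.g. block group length (1G)
wrong_end=$((start + len))        # first byte of the NEXT range: splits it
end=$((start + len - 1))          # correct inclusive end of this range
echo "wrong=$wrong_end right=$end"
```

Passing the exclusive end to a clear over an inclusive range touches one byte of the neighbouring state, which is exactly the splitting described above.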
The original --rootdir parameter shrinks the filesystem to its minimal
size, which is quite confusing for some users.
Add an extra note about this behavior.
Reported-by: Goffredo Baroncelli
Signed-off-by: Qu Wenruo
---
Documentation/mkfs.btrfs.asciidoc |
Since the new --rootdir can allocate chunks, it modifies the chunk
allocation result.
This patch updates the allocation info before verbose output to reflect
this.
Signed-off-by: Qu Wenruo
---
mkfs/main.c | 33 +
1 file changed, 33
On Monday, 4 September 2017 2:57:18 PM AEST Stefan Priebe - Profihost AG
wrote:
> > Then roughly make sure the complete set of metadata blocks fits in the
> > cache. For an fs of this size let's say/estimate 150G. Then maybe same
> > of double for data, so an SSD of 500G would be a first try.
>
2017-09-04 15:57 GMT+03:00 Stefan Priebe - Profihost AG :
> On 2017-09-04 12:53, Henk Slager wrote:
>> On Sun, Sep 3, 2017 at 8:32 PM, Stefan Priebe - Profihost AG
>> wrote:
>>> Hello,
>>>
>>> i'm trying to speed up big btrfs volumes.
>>>
>>> Some
On 2017-09-04 12:53, Henk Slager wrote:
> On Sun, Sep 3, 2017 at 8:32 PM, Stefan Priebe - Profihost AG
> wrote:
>> Hello,
>>
>> i'm trying to speed up big btrfs volumes.
>>
>> Some facts:
>> - Kernel will be 4.13-rc7
>> - needed volume size is 60TB
>>
>> Currently
On Sun, Sep 03, 2017 at 08:30:59PM +0200, Hans van Kranenburg wrote:
> On 09/03/2017 08:06 PM, Adam Borowski wrote:
> > On Sun, Sep 03, 2017 at 07:32:01PM +0200, Cloud Admin wrote:
> >> Beside of it, is it possible to find out what the real and compressed size
> >> of a file, for example or the
On Mon, Sep 04, 2017 at 02:05:34PM +0900, Misono, Tomohiro wrote:
> Since cmd_inspect_rootid() calls btrfs_open_dir(), it rejects a file being
> specified. But as the document says, a file should be supported.
>
> This patch introduces btrfs_open_file_or_dir(), which is a counterpart
> of
Hi!
Here's a utility to measure used compression type + ratio on a set of files
or directories: https://github.com/kilobyte/compsize
It should be of great help for users, and also if you:
* muck with compression levels
* add new compression types
* add heuristics that could err on withholding
2017-09-04 18:11 GMT+03:00 Adam Borowski :
> Hi!
> Here's a utility to measure used compression type + ratio on a set of files
> or directories: https://github.com/kilobyte/compsize
>
> It should be of great help for users, and also if you:
> * muck with compression levels
>
2017-09-04 21:42 GMT+03:00 Adam Borowski :
> On Mon, Sep 04, 2017 at 07:07:25PM +0300, Timofey Titovets wrote:
>> 2017-09-04 18:11 GMT+03:00 Adam Borowski :
>> > Here's a utility to measure used compression type + ratio on a set of
>> > files
>> > or
2017-09-04 21:32 GMT+03:00 Stefan Priebe - Profihost AG :
>> May be you can make work your raid setup faster by:
>> 1. Use Single Profile
>
> I'm already using the raid0 profile - see below:
If I understand correctly, you have a very big data set with random RW
access, so:
On 9/4/2017 5:11 PM, Adam Borowski wrote:
Hi!
Here's a utility to measure used compression type + ratio on a set of files
or directories: https://github.com/kilobyte/compsize
Great tool. Just tried it on some of my backup snapshots.
# compsize portage.20170904T2200
142432 files.
all
Henk Slager posted on Mon, 04 Sep 2017 13:09:24 +0200 as excerpted:
> On Mon, Sep 4, 2017 at 12:34 PM, Duncan <1i5t5.dun...@cox.net> wrote:
>
>> * Autodefrag works very well when these internal-rewrite-pattern files
>> are relatively small, say a quarter GiB or less, but, again with near-
>>