The rule originally comes from nocow writing, but snapshot-aware
defrag is a different case: the extent has already been written, and we
are not going to change the extent, only add a reference to the data.
So we can allow such compressed extents to be merged into
one bigger extent if they're pointing t
On 07/31/13 07:39, Tomasz Chmielewski wrote:
> On Wed, 31 Jul 2013 13:13:37 +0800
> Wang Shilong wrote:
>
>>> # git pull origin master
>>
>> Oops, I am sorry, it should be:
>>
>> git pull origin qgroup
>>
>> Would you please try it again?
>
> Excellent, it works:
>
> # btrfs qgroup show -p /mnt/l
On 07/29/13 10:05, Tomasz Chmielewski wrote:
> On Mon, 10 Jun 2013 09:41:39 +0200
> Arne Jansen wrote:
>
>>> Now, my questions:
>>>
>>> - what do both 104882176 104882176 numbers represent?
>>
>> The first number represents the amount of data in that subvolume,
>> regardless of whether that data is
On Thu, Aug 1, 2013 at 10:27 PM, Duncan <1i5t5.dun...@cox.net> wrote:
> Sandy McArthur posted on Thu, 01 Aug 2013 17:18:50 -0400 as excerpted:
>
>> While exploring some btrfs maintenance with respect to defragmenting I
>> ran the following commands:
>>
>> # filefrag /path/to/34G.file /path/to/5.7G.
On Thu, Aug 01, 2013 at 03:01:37PM -0700, Mark Fasheh wrote:
> On Wed, Jul 31, 2013 at 11:37:46PM +0800, Liu Bo wrote:
> > This aims to add a deduplication subcommand, 'btrfs dedup command',
> > i.e. register/unregister.
> >
> > It can be used to enable or disable dedup support for a filesystem.
>
Sandy McArthur posted on Thu, 01 Aug 2013 17:18:50 -0400 as excerpted:
> While exploring some btrfs maintenance with respect to defragmenting I
> ran the following commands:
>
> # filefrag /path/to/34G.file /path/to/5.7G.file
> /path/to/34G.file: 2406 extents found
> /path/to/5.7G.file: 572 exten
On Thu, Aug 01, 2013 at 05:18:50PM -0400, Sandy McArthur wrote:
> While exploring some btrfs maintenance with respect to defragmenting I
> ran the following commands:
>
> # filefrag /path/to/34G.file /path/to/5.7G.file
> /path/to/34G.file: 2406 extents found
> /path/to/5.7G.file: 572 extents found
> --- a/utils.c
> +++ b/utils.c
> @@ -1476,11 +1475,11 @@ u64 parse_size(char *s)
> return strtoull(s, NULL, 10) * mult;
> }
>
> -int open_file_or_dir(const char *fname)
> +int open_file_or_dir(const char *fname, DIR **dirstream)
> {
> int ret;
> struct stat st;
> - DIR *d
On Mon, Jul 15, 2013 at 07:36:50PM +0800, Wang Shilong wrote:
> valgrind complains that open_file_or_dir() causes a memory leak. That is
> because if we open a directory with opendir(), we should later call
> closedir() to free the memory.
I've reviewed this and don't see a better way to fix the leaks n
On Wed, Jul 31, 2013 at 11:37:46PM +0800, Liu Bo wrote:
> This aims to add a deduplication subcommand, 'btrfs dedup command',
> i.e. register/unregister.
>
> It can be used to enable or disable dedup support for a filesystem.
This seems to me like it should be a switch on btrfstune instead of a
su
While exploring some btrfs maintenance with respect to defragmenting I
ran the following commands:
# filefrag /path/to/34G.file /path/to/5.7G.file
/path/to/34G.file: 2406 extents found
/path/to/5.7G.file: 572 extents found
Thinking those mostly static files could be less fragmented I ran:
# btrfs
On Thu, Jul 25, 2013 at 10:12:01PM +0800, Eryu Guan wrote:
> The second btrfs command segfaults on big endian host(ppc64)
>
> btrfs subvolume snapshot /mnt/btrfs /mnt/btrfs/snap
> btrfs subvolume list -s /mnt/btrfs
>
> And ltrace shows
>
> localtime(0x10029c482d0)
On Wed, Jul 03, 2013 at 09:25:17PM +0800, Miao Xie wrote:
> --- a/btrfs.c
> +++ b/btrfs.c
> @@ -247,6 +247,7 @@ const struct cmd_group btrfs_cmd_group = {
> { "device", cmd_device, NULL, &device_cmd_group, 0 },
> { "scrub", cmd_scrub, NULL, &scrub_cmd_group, 0 },
>
> So do you mean that our whole hash value will be (key.objectid + bytes)
> because key.objectid is a part of hash value?
I think so, if I understood your question. The idea is to not store the
bytes of the hash that make up the objectid more than once so the tree
items are smaller.
For example:
> There were a few requests to tune the interval. This finally made me
> finish the patch and I will send it in a second.
Great, thanks.
> That's a good point and lowers my worries a bit, though it would be
> interesting to see in what way a beefy machine blows with 300 seconds
> set.
Agreed.
On Wed, Jul 31, 2013 at 10:29:23AM -0400, Josef Bacik wrote:
> There is no reason for this sort of jackassery. Thanks,
>
> Signed-off-by: Josef Bacik
> ---
> fs/btrfs/transaction.c |3 +--
> 1 files changed, 1 insertions(+), 2 deletions(-)
>
> diff --git a/fs/btrfs/transaction.c b/fs/btrfs
On Wed, Jul 31, 2013 at 09:35:00AM -0400, Josef Bacik wrote:
> On Wed, Jul 31, 2013 at 02:19:25PM +0200, tim wrote:
> > running btrfs on kernel 3.10.3, i got the following two backtraces,
> > first an info about a hung task and a kernel bug.
> >
> > are these known issues?
That's a crash that app
It's hardcoded to 30 seconds, which is fine for most users. Higher values
defer data being synced to permanent storage with obvious consequences
when the system crashes. The upper bound is not forced, but a warning is
printed if it's more than 300 seconds (5 minutes).
Signed-off-by: David Sterba
-
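If the patch lands as described, the interval becomes a mount-time tunable; a hedged usage sketch (the option name `commit` and the device/mount paths are assumptions taken from this discussion, so check your kernel's btrfs documentation before relying on it):

```shell
# Sketch, assuming the tunable is exposed as the 'commit' mount option.
# 30s is the default; values above 300s only print a warning, they are
# not rejected.
mount -o commit=120 /dev/sdb1 /mnt/btrfs   # sync every 2 minutes

# or adjust an already-mounted filesystem in place:
mount -o remount,commit=300 /mnt/btrfs
```

The trade-off is exactly the one described above: a longer interval batches more work per transaction, but up to that many seconds of data can be lost on a crash.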
On Wed, Jul 31, 2013 at 03:56:40PM -0700, Zach Brown wrote:
> > I am NO programmer by any stretch. Let's say I want them to be once
> > every 5 min (300 sec). Is the attached patch sane to achieve this?
>
> I think it's a reasonable patch to try, yeah.
There were a few requests to tune the int
Hi Josef,
On Thu, May 30, 2013 at 11:58 PM, Josef Bacik wrote:
> Dave reported a panic because the extent_root->commit_root was NULL in the
> caching kthread. That is because we just unset it in free_root_pointers,
> which
> is not the correct thing to do, we have to either wait for the caching
On 01/08/13 12:08, Hugo Mills wrote:
> I thought of using "btrfs device add" and just living with the
> untidy underlying devices, but an experiment with loopback
> filesystems shows that any data on the new device is silently
> obliterated (it might be nice if the docs mentioned this!)
You would e
On Thu, Aug 01, 2013 at 11:53:34AM +0100, Andrew Stubbs wrote:
> If I have two partitions, /dev/sda1 and /dev/sda2, one btrfs, and
> one ext4 (but I could convert it first), how can I merge them into
> one filesystem without moving all the data onto an external device
> and then moving it all back
If I have two partitions, /dev/sda1 and /dev/sda2, one btrfs, and one
ext4 (but I could convert it first), how can I merge them into one
filesystem without moving all the data onto an external device and then
moving it all back again? (I do have a backup, of course, but
transferring the data ta
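One common answer to this question can be sketched as follows (mount points and directory names are assumptions, and it carries the big caveat from the earlier loopback experiment: `btrfs device add` silently destroys whatever is on the added device):

```shell
# Assumptions: btrfs is on /dev/sda1 mounted at /mnt/btrfs, ext4 is on
# /dev/sda2, and the ext4 data fits on the btrfs partition for the copy.
mkdir -p /mnt/ext4 /mnt/btrfs/from-sda2
mount /dev/sda2 /mnt/ext4
cp -a /mnt/ext4/. /mnt/btrfs/from-sda2/   # move the data onto btrfs first
umount /mnt/ext4

# WARNING: this wipes /dev/sda2 -- only run it once the copy is verified.
btrfs device add /dev/sda2 /mnt/btrfs
btrfs balance start /mnt/btrfs            # spread extents over both devices
```

This avoids an external device, but it only works when the data on the ext4 partition fits on the btrfs one during the copy step; otherwise staging through a backup really is needed.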
On Wed, Jul 31, 2013 at 06:30:37PM +0200, Stefan Behrens wrote:
> On Wed, 31 Jul 2013 23:37:46 +0800, Liu Bo wrote:
> > This aims to add a deduplication subcommand, 'btrfs dedup command',
> > i.e. register/unregister.
> >
> > It can be used to enable or disable dedup support for a filesystem.
> >
On Wed, Jul 31, 2013 at 05:20:27PM -0400, Josef Bacik wrote:
> On Wed, Jul 31, 2013 at 11:37:40PM +0800, Liu Bo wrote:
> > Data deduplication is a specialized data compression technique for
> > eliminating
> > duplicate copies of repeating data.[1]
> >
> > This patch set is also related to "Conte
On Wed, Jul 31, 2013 at 03:50:50PM -0700, Zach Brown wrote:
>
> > +#define BTRFS_DEDUP_HASH_SIZE 32 /* 256bit = 32 * 8bit */
> > +#define BTRFS_DEDUP_HASH_LEN 4
> > +
> > +struct btrfs_dedup_hash_item {
> > + /* FIXME: put a hash type field here */
> > +
> > + __le64 hash[BTRFS_DEDUP_HASH_LE
On Thu, 1 Aug 2013 13:35:25 +0800, Wang Shilong wrote:
> Signed-off-by: Wang Shilong
> Signed-off-by: Qu Wenruo
> ---
> man/btrfsck.8.in | 35 ++-
> 1 file changed, 30 insertions(+), 5 deletions(-)
>
> diff --git a/man/btrfsck.8.in b/man/btrfsck.8.in
> index 5004
On Wed, 31 Jul 2013 23:33:59 +0200, Mathijs Kwik wrote:
> Hi all,
>
> For some time, I've successfully deployed btrfs send/receive as a
> viable backup solution.
> It's fast and flexible and nicely scriptable =)
> However, every once in a while, trouble strikes on the receiving end
> with a messag