Hi,

In general, the larger the block / chunk size, the less dedup can be
achieved: only blocks that are identical in their entirety can be shared, so
any block containing even one differing byte is skipped.
1M is already a little bit too big.
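
To put a rough number on the effect, here is a small, self-contained C
sketch (an illustration of the arithmetic only, not duperemove code). The
shared-prefix size and blocksizes are made-up values: only whole, aligned
blocks that lie entirely inside the shared region can be deduplicated, so
the bytes lost at each point where the files diverge grow with the
blocksize.

#include <stdio.h>

int main(void)
{
    /* Hypothetical example: two files share roughly the first 100 GB,
     * then diverge.  Only whole, aligned blocks inside the shared region
     * can be deduplicated; the remainder at the divergence point cannot,
     * and the worst case grows with the blocksize. */
    unsigned long long shared = 100ULL * 1000 * 1000 * 1000;
    unsigned long long sizes[] = { 128ULL * 1024,            /* 128K */
                                   1024ULL * 1024,           /* 1M   */
                                   100ULL * 1024 * 1024 };   /* 100M */

    for (int i = 0; i < 3; i++) {
        unsigned long long bs = sizes[i];
        unsigned long long dedupable = (shared / bs) * bs;
        printf("blocksize %9llu: dedupable %llu bytes, not dedupable %llu\n",
               bs, dedupable, shared - dedupable);
    }
    return 0;
}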

Thanks,
Xin


Sent: Friday, December 30, 2016 at 12:28 PM
From: "Peter Becker" <[email protected]>
To: linux-btrfs <[email protected]>
Subject: [markfasheh/duperemove] Why is blocksize limited to 1MB?
Hello, I have an 8 TB volume with multiple files of hundreds of GB each.
I am trying to dedupe it because the first hundred GB of many of the files are identical.
With a 128KB blocksize and the nofiemap and lookup-extents=no options, this
will take more than a week (dedupe only, the files were already hashed). So I tried -b
100M, but this returned an error: "Blocksize is bounded ...".

The reason is that the blocksize is limited to

#define MAX_BLOCKSIZE (1024U*1024)

But I can't find any explanation of why.
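
For reference, a bounds check of this sort typically looks like the sketch
below. This is only an illustration of where an error like "Blocksize is
bounded ..." would come from, not duperemove's actual code; the lower bound
and the exact message wording are made up.

#include <stdio.h>
#include <stdint.h>

#define MIN_BLOCKSIZE (4U*1024)     /* hypothetical lower bound, for illustration */
#define MAX_BLOCKSIZE (1024U*1024)  /* the limit quoted above */

/* Reject a user-supplied blocksize outside the compile-time bounds. */
static int check_blocksize(uint64_t blocksize)
{
    if (blocksize < MIN_BLOCKSIZE || blocksize > MAX_BLOCKSIZE) {
        fprintf(stderr, "Blocksize is bounded by %u and %u\n",
                MIN_BLOCKSIZE, MAX_BLOCKSIZE);
        return -1;
    }
    return 0;
}

int main(void)
{
    /* -b 100M would be rejected here. */
    return check_blocksize(100ULL * 1024 * 1024) ? 1 : 0;
}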
