> 1M is already a little bit too big in size.

Not in my use case :)

Is it right that this isn't a limit in btrfs? So I can patch this and try 100M.
The reason is that I must dedupe the whole 8 TB in less than a day,
but with 128K and 1M blocksizes it will take a week.
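
If the bound really lives only in duperemove itself, the change looks like a
one-liner. A rough sketch of what I would try (assuming MAX_BLOCKSIZE is just a
sanity check on the -b option and nothing else in the code relies on blocks
fitting in 1M; I have not verified that):

    #define MAX_BLOCKSIZE (100U*1024*1024)  /* was (1024U*1024); untested, raises the -b bound to 100M */

Then rebuild and rerun the hash + dedupe stages with -b 100M.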

I don't know why adding extents takes so long.
I/O during adding extents is less than 4 MB/s, and CPU (dual core) and
memory (8 GB) usage are less than 20%, on bare metal.
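
For context on what that phase does: as far as I understand it, duperemove
hands the duplicate ranges to the kernel through the btrfs extent-same ioctl
(FIDEDUPERANGE / BTRFS_IOC_FILE_EXTENT_SAME), roughly one request per duplicate
block, and the kernel locks and byte-compares both ranges before it shares the
extent. So the time seems to go into many small synchronous requests rather
than raw bandwidth, which would match the low I/O and CPU numbers. A minimal,
self-contained sketch of what one such request looks like (my own illustration,
not duperemove's code; assumes Linux >= 4.5 for FIDEDUPERANGE in <linux/fs.h>):

    /* dedupe_one.c -- sketch only: share the first 1M of <dst> with <src>
     * via the kernel dedupe ioctl. Build: gcc -o dedupe_one dedupe_one.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>   /* FIDEDUPERANGE, struct file_dedupe_range */

    int main(int argc, char **argv)
    {
            if (argc != 3) {
                    fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
                    return 1;
            }
            int src = open(argv[1], O_RDONLY);
            int dst = open(argv[2], O_RDWR);
            if (src < 0 || dst < 0) {
                    perror("open");
                    return 1;
            }

            /* One request, one destination; duperemove issues many of these. */
            struct file_dedupe_range *req =
                    calloc(1, sizeof(*req) + sizeof(struct file_dedupe_range_info));
            req->src_offset = 0;
            req->src_length = 1024 * 1024;      /* one 1M block */
            req->dest_count = 1;
            req->info[0].dest_fd = dst;
            req->info[0].dest_offset = 0;

            if (ioctl(src, FIDEDUPERANGE, req) < 0) {
                    perror("FIDEDUPERANGE");
                    return 1;
            }
            /* status 0 (FILE_DEDUPE_RANGE_SAME) means the ranges were identical
             * and the extent is now shared; DIFFERS means nothing was changed. */
            printf("status=%d bytes_deduped=%llu\n", req->info[0].status,
                   (unsigned long long)req->info[0].bytes_deduped);
            free(req);
            close(src);
            close(dst);
            return 0;
    }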

2017-01-01 5:38 GMT+01:00 Xin Zhou <[email protected]>:
> Hi,
>
> In general, the larger the block / chunk size is, the less dedup can be 
> achieved.
> 1M is already a little bit too big in size.
>
> Thanks,
> Xin
>
>
>
>
> Sent: Friday, December 30, 2016 at 12:28 PM
> From: "Peter Becker" <[email protected]>
> To: linux-btrfs <[email protected]>
> Subject: [markfasheh/duperemove] Why blocksize is limit to 1MB?
> Hello, I have an 8 TB volume with multiple files of hundreds of GB each.
> I am trying to dedupe this because the first hundred GB of many files are identical.
> With a 128KB blocksize and the nofiemap and lookup-extents=no options, it will
> take more than a week (dedupe only, already hashed). So I tried -b
> 100M, but this returned an error: "Blocksize is bounded ...".
>
> The reason is that the blocksize is limited to
>
> #define MAX_BLOCKSIZE (1024U*1024)
>
> But I can't find any description of why.
