Sorry for the late answer.
Approximately it's 150 bytes per individual block, so increasing the
blocksize is a good idea (fewer blocks means a smaller dedup table).
Also, when the L1 and L2 ARC are not enough, the system will start
generating disk IOPS, and RAIDZ is not very effective for random IOPS,
so this is likely to happen when your DRAM is not enough.
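To put that figure in context, a rough back-of-the-envelope sketch in Python
(assuming the ~150 bytes per block quoted above, that every block is unique,
and that the whole table has to sit in RAM; the real per-entry size varies by
build and by how much of the table is actually cached in core):

def ddt_ram_bytes(pool_bytes, recordsize, bytes_per_entry=150):
    # Worst case: every block is unique and every DDT entry is held in RAM.
    return (pool_bytes / recordsize) * bytes_per_entry

TiB = 2 ** 40
pool = 1.7 * TiB  # roughly the 1.7 TB usable pool in the original post

for rs in (8 * 1024, 64 * 1024, 128 * 1024):
    gib = ddt_ram_bytes(pool, rs) / 2 ** 30
    print("recordsize %3d KiB -> ~%.1f GiB of dedup table" % (rs // 1024, gib))

At the default 128 KiB recordsize that works out to roughly 2 GiB of table for
this pool; at 8 KiB records it is over 30 GiB, which is why the blocksize
matters so much here.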
On Jan 21, 2010, at 22:55, Daniel Carosone wrote:
On Thu, Jan 21, 2010 at 05:04:51PM +0100, erik.ableson wrote:
What I'm trying to get a handle on is how to estimate the memory
overhead required for dedup on that amount of storage.
We'd all appreciate better visibility of this.
On Jan 21, 2010, at 8:04 AM, erik.ableson wrote:
Hi all,
I'm going to be trying out some tests using b130 for dedup on a server with
about 1.7 TB of usable storage (14x146 GB in two raidz vdevs of 7 disks). What
I'm trying to get a handle on is how to estimate the memory overhead required
for dedup on that amount of storage.
On Thu, Jan 21, 2010 at 10:00 PM, Richard Elling
<richard.ell...@gmail.com> wrote:
On Jan 21, 2010, at 8:04 AM, erik.ableson wrote:
Hi all,
I'm going to be trying out some tests using b130 for dedup on a server with
about 1.7 TB of usable storage (14x146 GB in two raidz vdevs of 7 disks). What
I'm trying to get a handle on is how to estimate the memory overhead required
for dedup on that amount of storage.
On Thu, Jan 21, 2010 at 05:04:51PM +0100, erik.ableson wrote:
What I'm trying to get a handle on is how to estimate the memory
overhead required for dedup on that amount of storage.
We'd all appreciate better visibility of this. This requires:
- time and observation and experience, and
-
On Fri, Jan 22, 2010 at 08:55:16AM +1100, Daniel Carosone wrote:
For performance (rather than space) issues, I look at dedup as simply
increasing the size of the working set, with a goal of reducing the
amount of IO (avoided duplicate writes) in return.
I should add: and avoided future duplicate reads as well.
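Roughly, the trade looks like this (a toy sketch only; the 1 TiB written, the
2:1 dedup ratio, the 128 KiB recordsize and the 150-bytes-per-entry figure are
all assumptions for illustration, not measurements):

def dedup_tradeoff(logical_bytes, dedup_ratio, recordsize, ddt_entry_bytes=150):
    # Bytes of DDT added to the working set vs. duplicate writes avoided.
    unique_bytes = logical_bytes / dedup_ratio
    ddt_bytes = (unique_bytes / recordsize) * ddt_entry_bytes
    avoided_writes = logical_bytes - unique_bytes
    return ddt_bytes, avoided_writes

ddt, avoided = dedup_tradeoff(logical_bytes=2 ** 40,   # 1 TiB written
                              dedup_ratio=2.0,         # assumed 2:1
                              recordsize=128 * 1024)
print("DDT added to working set: ~%.1f GiB" % (ddt / 2 ** 30))
print("duplicate writes avoided: ~%.0f GiB" % (avoided / 2 ** 30))

With those assumed numbers you grow the working set by well under a gigabyte
and skip a few hundred gigabytes of duplicate writes; the balance obviously
shifts as the dedup ratio drops or the recordsize shrinks.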
On Thu, Jan 21, 2010 at 2:51 PM, Andrey Kuzmin
<andrey.v.kuz...@gmail.com> wrote:
Looking at the dedup code, I noticed that on-disk DDT entries are
compressed less efficiently than possible: the key is not compressed at
all (I'd expect roughly a 2:1 compression ratio with sha256 data).
A cryptographic hash such as sha256 is designed to be indistinguishable
from random data, so the keys will not compress.
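A quick throwaway check in Python illustrates the point (nothing to do with
the actual ZFS code, just hashing arbitrary input and compressing the digests):

import hashlib, zlib

# Concatenate 10,000 sha256 digests (stand-ins for DDT keys) and try to
# compress them; the output is essentially random, so it barely shrinks.
digests = b"".join(hashlib.sha256(i.to_bytes(8, "big")).digest()
                   for i in range(10000))
packed = zlib.compress(digests, 9)
print("raw %d bytes, compressed %d bytes (ratio %.2f:1)"
      % (len(digests), len(packed), len(digests) / len(packed)))

The ratio comes out at essentially 1:1, nowhere near the 2:1 you would see on
typical plain-text data.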