From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of erik.ableson
Sent: Thursday, January 21, 2010 6:05 PM
To: zfs-discuss
Subject: [zfs-discuss] Dedup memory overhead
Hi all,
I'm going to be trying out some tests using b130 for dedup on a server with
about 1.7 TB of usable storage (14x146 in two raidz vdevs of 7 disks). What
I'm trying to get a handle on is how to estimate the memory overhead required
for dedup on that amount of storage.
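A rough answer to the question can be sketched arithmetically. The figures below are assumptions, not from this thread: roughly 320 bytes of in-core state per unique block (the number commonly quoted for the DDT in builds of that era) and the default 128K recordsize; actual entry size varies by release and by how much of the DDT the ARC can hold.

```python
# Hedged back-of-envelope DDT memory estimate. The 320-byte figure is an
# assumption (commonly quoted per-entry in-core cost), not a measured value.
DDT_BYTES_PER_ENTRY = 320  # assumed average in-core entry size

def ddt_ram_estimate(pool_bytes, avg_block_bytes=128 * 1024, dedup_ratio=1.0):
    """Estimate DDT RAM for a pool full of data.

    avg_block_bytes defaults to the 128K recordsize; pools holding small
    files or zvols can have far smaller average blocks, which inflates
    the entry count (and the RAM needed) dramatically.
    """
    unique_blocks = pool_bytes / avg_block_bytes / dedup_ratio
    return unique_blocks * DDT_BYTES_PER_ENTRY

# ~1.7 TB of unique 128K blocks:
est = ddt_ram_estimate(1.7e12)
print(f"{est / 2**30:.1f} GiB")  # about 3.9 GiB
```

For an existing pool, `zdb -S poolname` simulates dedup against the data actually on disk, which is a better guide than any back-of-envelope figure.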
On 21 Jan 2010, at 22:55, Daniel Carosone wrote:
On Thu, Jan 21, 2010 at 05:04:51PM +0100, erik.ableson wrote:
What I'm trying to get a handle on is how to estimate the memory
overhead required for dedup on that amount of storage.
We'd all appreciate better visibility of this.
On Thu, Jan 21, 2010 at 10:00 PM, Richard Elling
richard.ell...@gmail.com wrote:
On Jan 21, 2010, at 8:04 AM, erik.ableson wrote:
Hi all,
I'm going to be trying out some tests using b130 for dedup on a server with
about 1.7 TB of usable storage (14x146 in two raidz vdevs of 7 disks). What
I'm trying to get a handle on is how to estimate the memory overhead required
for dedup on that amount of storage.
On Thu, Jan 21, 2010 at 05:04:51PM +0100, erik.ableson wrote:
What I'm trying to get a handle on is how to estimate the memory
overhead required for dedup on that amount of storage.
We'd all appreciate better visibility of this. This requires:
- time and observation and experience, and
-
On Fri, Jan 22, 2010 at 08:55:16AM +1100, Daniel Carosone wrote:
For performance (rather than space) issues, I look at dedup as simply
increasing the size of the working set, with a goal of reducing the
amount of IO (avoided duplicate writes) in return.
I should add and avoided future
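Daniel's working-set framing can be made concrete with a toy cost model. Everything below is illustrative unit arithmetic, not ZFS internals: every dedup write pays a DDT lookup (expensive only when the entry misses cache), and in exchange duplicate blocks skip the data write entirely.

```python
# Illustrative break-even model for dedup IO (hypothetical unit costs,
# not ZFS internals): bigger working set vs. avoided duplicate writes.
def net_io_per_block(dup_fraction, ddt_miss_rate,
                     lookup_cost=1.0, write_cost=1.0):
    """IO units per logical block written, relative to no dedup (= 1.0).

    A DDT lookup costs extra IO only when the entry is not cached
    (ddt_miss_rate); a duplicate hit avoids the data write entirely.
    """
    lookup_io = ddt_miss_rate * lookup_cost        # DDT entry not in cache
    write_io = (1.0 - dup_fraction) * write_cost   # only unique blocks hit disk
    return lookup_io + write_io

# 30% duplicates with the DDT fully cached: dedup reduces IO.
print(net_io_per_block(0.30, 0.0))  # 0.7
# No duplicates and every lookup missing cache: dedup only adds IO.
print(net_io_per_block(0.0, 1.0))   # 2.0
```

The model makes the obvious point explicit: dedup pays off only while the DDT stays resident; once lookups start missing cache, the avoided writes can be eaten by lookup IO.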
On Thu, Jan 21, 2010 at 2:51 PM, Andrey Kuzmin
andrey.v.kuz...@gmail.com wrote:
Looking at dedupe code, I noticed that on-disk DDT entries are
compressed less efficiently than possible: key is not compressed at
all (I'd expect roughly a 2:1 compression ratio with sha256 data),
A cryptographic hash such as SHA-256 is designed to produce output that is
indistinguishable from random data, so the keys are effectively
incompressible; a 2:1 ratio there is not achievable.
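The incompressibility of the keys is easy to check empirically. A sketch, with 32-byte SHA-256 digests standing in for DDT keys (the generator inputs are arbitrary, chosen only to produce distinct digests):

```python
# Empirical check: SHA-256 output is statistically random, so a generic
# compressor gets nowhere near the 2:1 ratio suggested above.
import hashlib
import zlib

# 1024 distinct digests, 32 bytes each = 32 KiB of hash output.
digests = b"".join(
    hashlib.sha256(str(i).encode()).digest() for i in range(1024)
)

compressed = zlib.compress(digests, 9)
ratio = len(digests) / len(compressed)
print(f"compression ratio: {ratio:.3f}")  # hovers around 1.0, not 2.0
```

zlib ends up storing the data essentially verbatim (plus a few bytes of framing), which is exactly the behaviour expected of high-entropy input.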