Re: [zfs-discuss] Dedup memory overhead

2010-02-04 Thread Mertol Ozyoney
-----Original Message----- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of erik.ableson Sent: Thursday, January 21, 2010 6:05 PM To: zfs-discuss Subject: [zfs-discuss] Dedup memory overhead Hi all, I'm going to be trying out some tests using b130 for dedup

Re: [zfs-discuss] Dedup memory overhead

2010-01-22 Thread erik.ableson
On 21 Jan. 2010, at 22:55, Daniel Carosone wrote: On Thu, Jan 21, 2010 at 05:04:51PM +0100, erik.ableson wrote: What I'm trying to get a handle on is how to estimate the memory overhead required for dedup on that amount of storage. We'd all appreciate better visibility of this. This

[zfs-discuss] Dedup memory overhead

2010-01-21 Thread erik.ableson
Hi all, I'm going to be trying out some tests using b130 for dedup on a server with about 1.7 TB of usable storage (14 x 146 GB in two raidz vdevs of 7 disks). What I'm trying to get a handle on is how to estimate the memory overhead required for dedup on that amount of storage. From what I
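Since this sizing question is the core of the thread, here is a rough back-of-envelope sketch of my own (not taken from any reply below): the ~320-bytes-per-unique-block figure and the average block sizes are assumptions, and the real per-entry cost depends on the build and on how much of the DDT actually stays resident in ARC.

# Rough DDT memory estimate for a pool like the one described above.
# Assumptions (mine, not from the thread): roughly 320 bytes of memory per
# unique block, a guessed average block size, and no duplicates (worst case).

def ddt_memory_bytes(usable_bytes, avg_block_size, bytes_per_entry=320):
    unique_blocks = usable_bytes / avg_block_size
    return unique_blocks * bytes_per_entry

usable = 1.7e12                       # ~1.7 TB usable storage
for bs in (128 * 1024, 8 * 1024):     # 128 KiB vs 8 KiB average block size
    gib = ddt_memory_bytes(usable, bs) / 2**30
    print(f"avg block {bs // 1024:>3} KiB -> ~{gib:.1f} GiB of DDT in memory")

With these assumed numbers, a 128 KiB average block size works out to roughly 4 GiB of table, while an 8 KiB average pushes it past 60 GiB, which is why the average block size dominates any estimate.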

Re: [zfs-discuss] Dedup memory overhead

2010-01-21 Thread Richard Elling
On Jan 21, 2010, at 8:04 AM, erik.ableson wrote: Hi all, I'm going to be trying out some tests using b130 for dedup on a server with about 1.7 TB of usable storage (14 x 146 GB in two raidz vdevs of 7 disks). What I'm trying to get a handle on is how to estimate the memory overhead required

Re: [zfs-discuss] Dedup memory overhead

2010-01-21 Thread Andrey Kuzmin
On Thu, Jan 21, 2010 at 10:00 PM, Richard Elling richard.ell...@gmail.com wrote: On Jan 21, 2010, at 8:04 AM, erik.ableson wrote: Hi all, I'm going to be trying out some tests using b130 for dedup on a server with about 1.7 TB of usable storage (14 x 146 GB in two raidz vdevs of 7 disks). What

Re: [zfs-discuss] Dedup memory overhead

2010-01-21 Thread Daniel Carosone
On Thu, Jan 21, 2010 at 05:04:51PM +0100, erik.ableson wrote: What I'm trying to get a handle on is how to estimate the memory overhead required for dedup on that amount of storage. We'd all appreciate better visibility of this. This requires: - time and observation and experience, and -

Re: [zfs-discuss] Dedup memory overhead

2010-01-21 Thread Daniel Carosone
On Fri, Jan 22, 2010 at 08:55:16AM +1100, Daniel Carosone wrote: For performance (rather than space) issues, I look at dedup as simply increasing the size of the working set, with a goal of reducing the amount of IO (avoided duplicate writes) in return. I should add and avoided future
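To make that trade-off concrete, here is a toy calculation of my own (the dedup ratio, block counts, and cache hit rate below are illustrative assumptions, not figures from this thread): every would-be write still costs a DDT lookup, but duplicate blocks never hit the disk.

# Illustrative trade-off model: dedup avoids duplicate writes, but every
# write requires a DDT lookup, which becomes a random read when the entry
# is not already cached. All inputs here are made-up example values.

def modelled_io(total_writes, dedup_ratio, ddt_hit_rate):
    unique_writes = total_writes / dedup_ratio          # blocks actually written
    writes_saved = total_writes - unique_writes         # duplicate writes avoided
    ddt_misses = total_writes * (1 - ddt_hit_rate)      # extra random reads
    return writes_saved, ddt_misses

saved, misses = modelled_io(total_writes=1_000_000, dedup_ratio=2.0, ddt_hit_rate=0.9)
print(f"writes avoided: {saved:,.0f}, extra DDT read misses: {misses:,.0f}")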

Re: [zfs-discuss] Dedup memory overhead

2010-01-21 Thread Mike Gerdts
On Thu, Jan 21, 2010 at 2:51 PM, Andrey Kuzmin andrey.v.kuz...@gmail.com wrote: Looking at dedupe code, I noticed that on-disk DDT entries are compressed less efficiently than possible: the key is not compressed at all (I'd expect roughly a 2:1 compression ratio with sha256 data), A cryptographic
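The message is cut off here, but the point it is heading toward can be checked directly: a cryptographic hash such as SHA-256 is effectively random data, so a general-purpose compressor gets nowhere near 2:1 on the key. A minimal standalone check (my own sketch, not the actual on-disk DDT layout):

# Compress a pile of SHA-256 digests and see how little zlib can do with
# them. Cryptographic hash output is high-entropy, so expect ~1:1 or worse.

import hashlib, os, zlib

digests = b"".join(hashlib.sha256(os.urandom(64)).digest() for _ in range(10000))
compressed = zlib.compress(digests, 9)
print(f"raw: {len(digests)} bytes, zlib level 9: {len(compressed)} bytes "
      f"(ratio {len(digests) / len(compressed):.2f}:1)")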