Re: [zfs-discuss] ZFS Dedupe reporting incorrect savings

2009-12-17 Thread Adam Leventhal
Hi Giridhar,

The size reported by ls can include things like holes in the file. What space 
usage does the zfs(1M) command report for the filesystem?

Adam

On Dec 16, 2009, at 10:33 PM, Giridhar K R wrote:

 Hi,
 
 Reposting as I have not gotten any response.
 
 Here is the issue. I created a zpool with 64k recordsize and enabled dedupe 
 on it.
 --zpool create -O recordsize=64k TestPool device1
 --zfs set dedup=on TestPool
 
 I copied files onto this pool over NFS from a Windows client.
 
 Here is the output of zpool list
 -- zpool list
 NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
 TestPool 696G 19.1G 677G 2% 1.13x ONLINE -
 
 I ran ls -l /TestPool and saw the total size reported as 51,193,782,290 
 bytes.
 The alloc size reported by zpool along with the DEDUP of 1.13x does not add up 
 to 51,193,782,290 bytes.
 
 According to the DEDUP (Dedupe ratio) the amount of data copied is 21.58G 
 (19.1G * 1.13) 
 
 Here is the output from zdb -DD
 
 -- zdb -DD TestPool
 DDT-sha256-zap-duplicate: 33536 entries, size 272 on disk, 140 in core
 DDT-sha256-zap-unique: 278241 entries, size 274 on disk, 142 in core
 
 DDT histogram (aggregated over all DDTs):
 
 bucket              allocated                       referenced
 ______   ______________________________   ______________________________
 refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
 ------   ------   -----   -----   -----   ------   -----   -----   -----
      1     272K   17.0G   17.0G   17.0G     272K   17.0G   17.0G   17.0G
      2    32.7K   2.05G   2.05G   2.05G    65.6K   4.10G   4.10G   4.10G
      4       15    960K    960K    960K       71   4.44M   4.44M   4.44M
      8        4    256K    256K    256K       53   3.31M   3.31M   3.31M
     16        1     64K     64K     64K       16      1M      1M      1M
    512        1     64K     64K     64K      854   53.4M   53.4M   53.4M
     1K        1     64K     64K     64K    1.08K   69.1M   69.1M   69.1M
     4K        1     64K     64K     64K    5.33K    341M    341M    341M
  Total     304K   19.0G   19.0G   19.0G     345K   21.5G   21.5G   21.5G
 
 dedup = 1.13, compress = 1.00, copies = 1.00, dedup * compress / copies = 1.13
 
 
 Am I missing something?
 
 Your inputs are much appreciated.
 
 Thanks,
 Giri
 -- 
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Adam Leventhal, Fishworks                      http://blogs.sun.com/ahl



Re: [zfs-discuss] ZFS Dedupe reporting incorrect savings

2009-12-17 Thread Giridhar K R
 Hi Giridhar,
 
 The size reported by ls can include things like holes in the file. What space 
 usage does the zfs(1M) command report for the filesystem?
 
 Adam
 

Thanks for the response Adam.

Are you talking about zfs list?

It displays 19.6 as allocated space.

What does ZFS treat as a hole, and how does it identify one?

Thanks,
Giri


Re: [zfs-discuss] ZFS Dedupe reporting incorrect savings

2009-12-17 Thread Adam Leventhal
 Thanks for the response Adam.
 
 Are you talking about ZFS list?
 
 It displays 19.6 as allocated space.
 
 What does ZFS treat as hole and how does it identify?

ZFS will compress blocks of zeros down to nothing and treat them like
sparse files. 19.6G is pretty close to your computed value of 21.58G. Does
your pool happen to be 10+1 RAID-Z?
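Adam's point about holes can be illustrated with a generic sparse-file sketch (plain POSIX behavior, not ZFS-specific; allocation granularity varies by filesystem):

```python
import os
import tempfile

# Seek past 1 MiB without writing anything, then write 4 KiB.
# The skipped region is a hole: it counts toward the file's
# logical size (what ls -l reports) but needs no on-disk blocks
# on filesystems that support sparse files.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "holey")
    with open(path, "wb") as f:
        f.seek(1 << 20)
        f.write(b"x" * 4096)
    st = os.stat(path)
    logical = st.st_size            # 1 MiB + 4 KiB, as ls -l would show
    allocated = st.st_blocks * 512  # actual blocks, as du would show

print(logical, allocated)
```

On a filesystem with hole support, `allocated` comes out far smaller than `logical`, which is why ls totals can overstate real space usage.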

Adam

--
Adam Leventhal, Fishworks                      http://blogs.sun.com/ahl



Re: [zfs-discuss] ZFS Dedupe reporting incorrect savings

2009-12-17 Thread Giridhar K R
I used the default while creating the zpool with one disk drive. I guess it is a 
RAID 0 configuration.

Thanks,
Giri


[zfs-discuss] ZFS Dedupe reporting incorrect savings

2009-12-16 Thread Giridhar K R
Hi,

Reposting as I have not gotten any response.

Here is the issue. I created a zpool with 64k recordsize and enabled dedupe on 
it.
--zpool create -O recordsize=64k TestPool device1
--zfs set dedup=on TestPool

I copied files onto this pool over NFS from a Windows client.

Here is the output of zpool list
-- zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
TestPool 696G 19.1G 677G 2% 1.13x ONLINE -

I ran ls -l /TestPool and saw the total size reported as 51,193,782,290 bytes.
The alloc size reported by zpool along with the DEDUP of 1.13x does not add up 
to 51,193,782,290 bytes.

According to the DEDUP (Dedupe ratio) the amount of data copied is 21.58G 
(19.1G * 1.13) 
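Spelled out, the two figures being compared (a quick sketch of the arithmetic only, using the numbers quoted above):

```python
# Figures quoted above from zpool list and ls -l.
alloc_gib = 19.1               # zpool list ALLOC
dedup_ratio = 1.13             # zpool list DEDUP
ls_bytes = 51_193_782_290      # total reported by ls -l

logical_gib = alloc_gib * dedup_ratio  # pool data before dedup: ~21.58 GiB
ls_gib = ls_bytes / 2**30              # what ls -l reports: ~47.68 GiB

print(round(logical_gib, 2), round(ls_gib, 2))
```

The ls figure is more than twice the pool's pre-dedup figure, which is the gap in question.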

Here is the output from zdb -DD

-- zdb -DD TestPool
DDT-sha256-zap-duplicate: 33536 entries, size 272 on disk, 140 in core
DDT-sha256-zap-unique: 278241 entries, size 274 on disk, 142 in core

DDT histogram (aggregated over all DDTs):

bucket              allocated                       referenced
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     1     272K   17.0G   17.0G   17.0G     272K   17.0G   17.0G   17.0G
     2    32.7K   2.05G   2.05G   2.05G    65.6K   4.10G   4.10G   4.10G
     4       15    960K    960K    960K       71   4.44M   4.44M   4.44M
     8        4    256K    256K    256K       53   3.31M   3.31M   3.31M
    16        1     64K     64K     64K       16      1M      1M      1M
   512        1     64K     64K     64K      854   53.4M   53.4M   53.4M
    1K        1     64K     64K     64K    1.08K   69.1M   69.1M   69.1M
    4K        1     64K     64K     64K    5.33K    341M    341M    341M
 Total     304K   19.0G   19.0G   19.0G     345K   21.5G   21.5G   21.5G

dedup = 1.13, compress = 1.00, copies = 1.00, dedup * compress / copies = 1.13


Am I missing something?

Your inputs are much appreciated.

Thanks,
Giri


Re: [zfs-discuss] ZFS Dedupe reporting incorrect savings

2009-12-15 Thread Henrik Johansson
Hello,

On Dec 15, 2009, at 8:02 AM, Giridhar K R wrote:

 Hi,
 Created a zpool with 64k recordsize and enabled dedupe on it.
 zpool create -O recordsize=64k TestPool device1
 zfs set dedup=on TestPool
 
 I copied files onto this pool over NFS from a Windows client.
 
 Here is the output of zpool list
 Prompt:~# zpool list
 NAME  SIZE  ALLOC   FREECAP  DEDUP  HEALTH  ALTROOT
 TestPool   696G  19.1G   677G 2%  1.13x  ONLINE  -
 
 When I ran a dir /s command on the share from a windows client cmd, I see 
 the file size as 51,193,782,290 bytes. The alloc size reported by zpool along 
 with the DEDUP of 1.13x does not add up to 51,193,782,290 bytes.
 
 According to the DEDUP (Dedupe ratio) the amount of data copied is 21.58G 
 (19.1G * 1.13) 

Are you sure this problem is related to ZFS, and not a Windows, link, or CIFS issue? 
Have you looked at the filesystem from the OpenSolaris host locally? Are you sure 
there are no links in the filesystems that the Windows client also counts?

Henrik
http://sparcv9.blogspot.com



Re: [zfs-discuss] ZFS Dedupe reporting incorrect savings

2009-12-15 Thread Giridhar K R
As I have noted above after editing the initial post, it's the same locally too.

I found that ls -l on the zpool also reports 51,193,782,290 bytes.


[zfs-discuss] ZFS Dedupe reporting incorrect savings

2009-12-14 Thread Giridhar K R
Hi,
Created a zpool with 64k recordsize and enabled dedupe on it.
 zpool create -O recordsize=64k TestPool device1
 zfs set dedup=on TestPool

I copied files onto this pool over NFS from a Windows client.

Here is the output of zpool list
Prompt:~# zpool list
NAME  SIZE  ALLOC   FREECAP  DEDUP  HEALTH  ALTROOT
TestPool   696G  19.1G   677G 2%  1.13x  ONLINE  -

When I ran a dir /s command on the share from a windows client cmd, I see the 
file size as 51,193,782,290 bytes. The alloc size reported by zpool along with 
the DEDUP of 1.13x does not add up to 51,193,782,290 bytes.

According to the DEDUP (Dedupe ratio) the amount of data copied is 21.58G 
(19.1G * 1.13) 

Here is the output from zdb -DD

Prompt:~# zdb -DD TestPool
DDT-sha256-zap-duplicate: 33536 entries, size 272 on disk, 140 in core
DDT-sha256-zap-unique: 278241 entries, size 274 on disk, 142 in core

DDT histogram (aggregated over all DDTs):

bucket              allocated                       referenced
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     1     272K   17.0G   17.0G   17.0G     272K   17.0G   17.0G   17.0G
     2    32.7K   2.05G   2.05G   2.05G    65.6K   4.10G   4.10G   4.10G
     4       15    960K    960K    960K       71   4.44M   4.44M   4.44M
     8        4    256K    256K    256K       53   3.31M   3.31M   3.31M
    16        1     64K     64K     64K       16      1M      1M      1M
   512        1     64K     64K     64K      854   53.4M   53.4M   53.4M
    1K        1     64K     64K     64K    1.08K   69.1M   69.1M   69.1M
    4K        1     64K     64K     64K    5.33K    341M    341M    341M
 Total     304K   19.0G   19.0G   19.0G     345K   21.5G   21.5G   21.5G

dedup = 1.13, compress = 1.00, copies = 1.00, dedup * compress / copies = 1.13
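The summary line is just the histogram's Total row restated; a minimal check using the DSIZE totals above:

```python
# Total DSIZE figures from the zdb -DD histogram above, in GiB.
allocated_dsize = 19.0    # unique data actually stored on disk
referenced_dsize = 21.5   # logical data before deduplication

dedup = referenced_dsize / allocated_dsize
print(round(dedup, 2))
```

With compress and copies both at 1.00, dedup * compress / copies reduces to the dedup ratio itself, matching the 1.13x that zpool list shows.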


Am I missing something?

Your inputs are much appreciated.

Thanks,
Giri