Re: [zfs-discuss] ZFS space efficiency when copying files from

2008-12-01 Thread BJ Quinn
Oh.  Yup, I had figured this out on my own but forgot to post back.  --inplace 
accomplishes what we're talking about.  --no-whole-file is also necessary if 
copying files locally (not over the network), because rsync does default to 
only copying changed blocks, but it overrides that default behavior when not 
copying over the network.
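For the archives, here's a sketch of the invocation being described (paths are placeholders, and exact flag behavior can vary by rsync version):

```shell
# --inplace       : write changed blocks directly into the destination file,
#                   so ZFS COW + snapshots only accumulate the actual deltas
# --no-whole-file : force the delta algorithm even for local copies, where
#                   rsync otherwise defaults to copying whole files
rsync -a --inplace --no-whole-file /data/source/ /backup/tank/dest/
```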

Also, has anyone figured out a best-case blocksize to use with rsync?  I tried 
zfs get volblocksize [pool], but it just returns -.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS space efficiency when copying files from

2008-12-01 Thread Darren J Moffat
BJ Quinn wrote:
 Oh.  Yup, I had figured this out on my own but forgot to post back.  
 --inplace accomplishes what we're talking about.  --no-whole-file is also 
 necessary if copying files locally (not over the network), because rsync does 
 default to only copying changed blocks, but it overrides that default 
 behavior when not copying over the network.
 
 Also, has anyone figured out a best-case blocksize to use with rsync?  I 
 tried zfs get volblocksize [pool], but it just returns -.

zfs get recordsize dataset


-- 
Darren J Moffat


Re: [zfs-discuss] ZFS space efficiency when copying files from

2008-12-01 Thread BJ Quinn
Should I set that as rsync's block size?


Re: [zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-24 Thread BJ Quinn
Here's an idea - I understand that I need rsync on both sides if I want to 
minimize network traffic.  What if I don't care about that - the entire file 
can come over the network, but I specifically only want rsync to write the 
changed blocks to disk.  Does rsync offer a mode like that?


Re: [zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-24 Thread Bob Friesenhahn
On Mon, 24 Nov 2008, BJ Quinn wrote:

 Here's an idea - I understand that I need rsync on both sides if I 
 want to minimize network traffic.  What if I don't care about that - 
 the entire file can come over the network, but I specifically only 
 want rsync to write the changed blocks to disk.  Does rsync offer a 
 mode like that?

My understanding is that the way rsync works, if a file already 
exists, then checksums are computed for ranges of the file, and the 
data is only sent/updated if that range is determined to have changed. 
While you can likely configure rsync to send the whole file, I think 
that it does what you want by default.

This is very easy for you to test for yourself.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-24 Thread Erik Trimble
Bob Friesenhahn wrote:
 On Mon, 24 Nov 2008, BJ Quinn wrote:

   
 Here's an idea - I understand that I need rsync on both sides if I 
 want to minimize network traffic.  What if I don't care about that - 
 the entire file can come over the network, but I specifically only 
 want rsync to write the changed blocks to disk.  Does rsync offer a 
 mode like that?
 

 My understanding is that the way rsync works, if a file already 
 exists, then checksums are computed for ranges of the file, and the 
 data is only sent/updated if that range is determined to have changed. 
 While you can likely configure rsync to send the whole file, I think 
 that it does what you want by default.

 This is very easy for you to test for yourself.

 Bob
 ==
 Bob Friesenhahn
 [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


This is indeed the default mode for rsync (deltas only).  The '-W' 
option forces a copy of the entire file, rather than just the changes.  I 
_believe_ the standard checksum block size is 4kb, but I'm not really 
sure (it's buried in the documentation somewhere, and it's customizable 
via the -B option).
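As a hedged sketch (the dataset name and paths here are made up): one could read the dataset's recordsize and feed it to rsync's -B, so the delta-checksum blocks line up with ZFS's blocks. Whether that actually helps is worth benchmarking rather than assuming.

```shell
# zfs get -Hp prints the raw value in bytes (131072 for the 128K default)
RECSIZE=$(zfs get -Hp -o value recordsize tank/backups)

# Use it as rsync's checksum block size so delta boundaries match ZFS blocks
rsync -a --inplace --no-whole-file -B "$RECSIZE" /src/ /tank/backups/
```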

One note here for ZFS users:

On ZFS (or any other COW filesystem), rsync unfortunately does NOT do 
the Right Thing when syncing an existing file.  From ZFS's standpoint, 
the most efficient way would be merely to rewrite the changed blocks, 
thus allowing COW and snapshots to make a fully efficient storage of the 
changed file.

Unfortunately, rsync instead writes the ENTIRE file to a temp file 
(.blahtmpfoosomethingorother) in the same directory as the changed file, 
writes the changed blocks into that copy, then unlinks the original file 
and renames the temp file to the original name.

This results in about worst-case space usage.  I have this problem with 
storing backups of mbox files (don't ask) - I have large files which 
change frequently, but less than 10% of each file actually changes 
daily.  Due to the way rsync works, ZFS snapshots don't help me on 
replicated data, so I end up storing the entire file every time.

I _really_ wish rsync had an option to copy in place or something like 
that, where the updates are made directly to the file, rather than a 
temp copy.




-- 
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



Re: [zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-24 Thread Bob Friesenhahn
On Mon, 24 Nov 2008, Erik Trimble wrote:

 One note here for ZFS users:

 On ZFS (or any other COW filesystem), rsync unfortunately does NOT do the 
 Right Thing when syncing an existing file.  From ZFS's standpoint, the most 
 efficient way would be merely to rewrite the changed blocks, thus allowing 
 COW and snapshots to make a fully efficient storage of the changed file.

Bummer. In that case, someone should file a bug in rsync's bug tracker 
(same one as used by Samba) to offer a better (direct overwrite) 
mode for ZFS.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-24 Thread Albert Chin
On Mon, Nov 24, 2008 at 08:43:18AM -0800, Erik Trimble wrote:
 I _really_ wish rsync had an option to copy in place or something like 
 that, where the updates are made directly to the file, rather than a 
 temp copy.

Isn't this what --inplace does?

-- 
albert chin ([EMAIL PROTECTED])


Re: [zfs-discuss] ZFS space efficiency when copying files from

2008-11-24 Thread Al Tobey
Rsync can update in-place.   From rsync(1):
--inplace   update destination files in-place


Re: [zfs-discuss] ZFS space efficiency when copying files from

2008-11-24 Thread Erik Trimble
Al Tobey wrote:
 Rsync can update in-place.   From rsync(1):
 --inplace   update destination files in-place
   
Whee!  This is now newly working (for me).  I'd been using an older 
rsync, where this option didn't work properly on ZFS.

It looks like this was fixed in newer rsync releases.

--inplace does indeed work correctly, at least in the 3.0.4 version I 
just tested on Cygwin.

I'm going to test the 2.6.9 rsync on a Nevada machine right now.

(OK, tested it.)  2.6.9 works as expected with --inplace.  I suspect 
that the fix to --inplace in 2.6.4 also made it work with ZFS.

Yipee!

-- 
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



[zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-17 Thread BJ Quinn
We're considering using an OpenSolaris server as a backup server.  Some of the 
servers to be backed up would be Linux and Windows servers, and potentially 
Windows desktops as well.  What I had imagined was that we could copy files 
over to the ZFS-based server nightly, take a snapshot, and only the blocks that 
had changed of the files that were being copied over would be stored on disk.

What I found was that you can take a snapshot, make a small change to a large 
file on a ZFS filesystem, take another snapshot, and you'll only store a few 
blocks extra.  However, if you copy the same file of the same name from another 
source to the ZFS filesystem, it doesn't conserve any blocks.  To a certain 
extent, I understand why - when copying a file from another system (even if 
it's the same file or a slightly changed version of the same file), the 
filesystem actually does write to every block of the file, which I guess marks 
all those blocks as changed.

Is there any way to have ZFS check to realize that in fact the blocks being 
copied from another system aren't different, or that only a few of the blocks 
are different?  Perhaps there's another way to copy the file across the network 
that only copies the changed blocks.  I believe rsync can do this, but some of 
the servers in question are Windows servers and rsync/cygwin might not be an 
option.
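One way to see this effect for yourself (dataset and file names below are placeholders; a sketch, not a recipe):

```shell
# Overwrite one record of an existing file in place, bracketed by snapshots
zfs snapshot tank/backup@before
dd if=/dev/urandom of=/tank/backup/big.dat bs=128k count=1 conv=notrunc
zfs snapshot tank/backup@after

# @before's USED column should show only the handful of blocks that changed;
# after a whole-file copy from another machine, it holds the entire file
zfs list -t snapshot -o name,used | grep tank/backup
```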


Re: [zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-17 Thread BJ Quinn
Thank you both for your responses.  Let me see if I understand correctly - 

1.  Dedup is what I really want, but it's not implemented yet.

2.  The only other way to accomplish this sort of thing is rsync (in other 
words, don't overwrite the block in the first place if it's not different), and 
if I'm on Windows, I'll just have to go ahead and install rsync on my Windows 
boxes if I want it to work correctly.

Wmurnane, you mentioned there was a Windows-based rsync daemon.  Did you mean 
one other than the cygwin-based version?  I didn't know of any native Windows 
rsync software.


Re: [zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-17 Thread Will Murnane
On Mon, Nov 17, 2008 at 20:54, BJ Quinn [EMAIL PROTECTED] wrote:
 1.  Dedup is what I really want, but it's not implemented yet.
Yes, as I read it.  greenBytes [1] claims to have dedup on their
system; you might investigate them if you decide rsync won't work for
your application.

 2.  The only other way to accomplish this sort of thing is rsync (in other 
 words, don't overwrite the block in the first place if it's not different), 
 and if I'm on Windows, I'll just have to go ahead and install rsync on my 
 Windows boxes if I want it to work correctly.
I believe so, yes.  Other programs may have the same capability, but
rsync by any other name would smell as sweet.

 Wmurnane, you mentioned there was a Windows-based rsync daemon.  Did you mean 
 one other than the cygwin-based version?  I didn't know of any native Windows 
 rsync software.
The link I gave ([2]) contains a version of rsync which is
``self-contained''---it does use Cygwin libraries, but it includes its
own copies of the ones it needs.  It's also nicely integrated with the
Windows management tools, in that it uses a Windows service and
Windows scheduled tasks to do its job rather than re-inventing
circular rolling things everywhere.

Will

[1]: http://www.green-bytes.com/
[2]: http://www.aboutmyip.com/AboutMyXApp/DeltaCopy.jsp


Re: [zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-17 Thread Tim
On Mon, Nov 17, 2008 at 3:33 PM, Will Murnane [EMAIL PROTECTED] wrote:

 On Mon, Nov 17, 2008 at 20:54, BJ Quinn [EMAIL PROTECTED] wrote:
  1.  Dedup is what I really want, but it's not implemented yet.
 Yes, as I read it.  greenBytes [1] claims to have dedup on their
 system; you might investigate them if you decide rsync won't work for
 your application.

  2.  The only other way to accomplish this sort of thing is rsync (in
 other words, don't overwrite the block in the first place if it's not
 different), and if I'm on Windows, I'll just have to go ahead and install
 rsync on my Windows boxes if I want it to work correctly.
 I believe so, yes.  Other programs may have the same capability, but
 rsync by any other name would smell as sweet.

  Wmurnane, you mentioned there was a Windows-based rsync daemon.  Did you
 mean one other than the cygwin-based version?  I didn't know of any native
 Windows rsync software.
 The link I gave ([2]) contains a version of rsync which is
 ``self-contained''---it does use Cygwin libraries, but it includes its
 own copies of the ones it needs.  It's also nicely integrated with the
 Windows management tools, in that it uses a Windows service and
 Windows scheduled tasks to do its job rather than re-inventing
 circular rolling things everywhere.



Rsync:
http://www.nexenta.com/corp/index.php?option=com_content&task=view&id=64&Itemid=85


Re: [zfs-discuss] zfs space efficiency

2007-07-07 Thread -=dave
agreed.

while a bitwise check is the only assured way to determine that two blocks 
are duplicates, if the check were done in a streaming method as you suggest, 
the performance hit, while huge compared to not checking, would be more than 
bearable within an environment with large known levels of duplicative data, 
such as a large central backup zfs send target.  the checksum metadata is 
sent first, then the data, while the receiving system checks its db for a 
possible dupe and, if one is found, reads the data from local disks and 
compares it to the data as it comes in from the sender.  if it gets to the 
end and hasn't found a difference, it updates the pointer for the block to 
point to the duplicate.  this won't save any bandwidth during the backup, 
but it will save on-disk space and, given the application, could be very 
advantageous.

thank you for the insightful discussion on this.  within the electronic 
discovery and records and information management space, data deduplication 
and policy-based aging are the foremost topics of the day, but that is at 
the file level, and block-level deduplication would lend no benefit to that 
regardless.

-=dave
 
 


Re: [zfs-discuss] zfs space efficiency

2007-07-07 Thread -=dave
one other thing... the checksums for all files to send *could* be checked 
first in batch, with known-unique blocks prioritized and sent first, then the 
possibly duplicative data sent afterwards to be verified as dupes, thereby 
decreasing the possible data loss for the backup window to levels equivalently 
low to the checksum collision probability.

-=dave
 
 


Re: [zfs-discuss] zfs space efficiency

2007-07-02 Thread J. David Beutel
Mattias Pantzare wrote:
 For this application (deduplication data) the likelihood of matching
 hashes are very high. In fact it has to be, otherwise there would not
 be any data to deduplicate.

 In the cp example, all writes would have matching hashes and all need 
 a verify. 

Would the read for verifying a matching hash take much longer than 
writing duplicate data?  Wouldn't the significant overhead be in 
managing hashes and searching for matches, not in verifying matches?  
However, this could be optimized for the cp example by keeping a cache 
of the hashes of data that was recently read, or even caching the data 
itself so that verification requires no duplicate read.  A disk-wide 
search for independent duplicate data would be a different process, though.

Cheers,
11011011


Re: [zfs-discuss] zfs space efficiency

2007-06-30 Thread Mattias Pantzare

2007/6/25, [EMAIL PROTECTED] [EMAIL PROTECTED]:


I wouldn't de-duplicate without actually verifying that two blocks were
actually bitwise identical.

Absolutely not, indeed.

But the nice property of hashes is that if the hashes don't match then
the inputs do not either.

I.e., the likelihood of having to do a full bitwise compare is vanishingly
small; the likelihood of it returning equal is high.


For this application (deduplication data) the likelihood of matching
hashes are very high. In fact it has to be, otherwise there would not
be any data to deduplicate.

In the cp example, all writes would have matching hashes and all need a verify.


Re: [zfs-discuss] zfs space efficiency

2007-06-25 Thread Bill Sommerfeld
On Sun, 2007-06-24 at 16:58 -0700, dave johnson wrote:
 The most common non-proprietary hash calc for file-level deduplication seems 
 to be the combination of SHA1 and MD5 together.  Collisions have been 
 shown to exist in MD5 and are theorized to exist in SHA1 by extrapolation, but 
 the probability of collisions occurring simultaneously in both is as small as 
 the capacity of ZFS is large :)

No.  Collisions in *any* hash function with output smaller than input
are known to exist through information theory.  The tricky part is
finding the collisions without needing to resort to brute force search.

Last I checked, the cryptographers specializing in hash functions are
much less optimistic than this.  

I wouldn't de-duplicate without actually verifying that two blocks were
actually bitwise identical.  

 
 While computationally intense, this would be a VERY welcome feature addition 
 to ZFS and given the existing infrastructure within the filesystem already, 
 while non-trivial by any means, it seems a prime candidate.  I am not a 
 programmer so I do not have the expertise to spearhead such a movement but I 
 would think getting at least a placeholder Goals and Objectives page into 
 the OZFS community pages would be a good start even if movement on this 
 doesn't come for a year or more.
 
 Thoughts ?
 
 -=dave
 
 - Original Message - 
 From: Gary Mills [EMAIL PROTECTED]
 To: Erik Trimble [EMAIL PROTECTED]
 Cc: Matthew Ahrens [EMAIL PROTECTED]; roland [EMAIL PROTECTED]; 
 zfs-discuss@opensolaris.org
 Sent: Sunday, June 24, 2007 3:58 PM
 Subject: Re: [zfs-discuss] zfs space efficiency
 
 
  On Sun, Jun 24, 2007 at 03:39:40PM -0700, Erik Trimble wrote:
  Matthew Ahrens wrote:
  Will Murnane wrote:
  On 6/23/07, Erik Trimble [EMAIL PROTECTED] wrote:
  Now, wouldn't it be nice to have syscalls which would implement cp
  and
  mv, thus abstracting it away from the userland app?
 
  A copyfile primitive would be great!  It would solve the problem of
  having all those friends to deal with -- stat(), extended
  attributes, UFS ACLs, NFSv4 ACLs, CIFS attributes, etc.  That isn't to
  say that it would have to be implemented in the kernel; it could
  easily be a library function.
  
  I'm with Matt.  Having a copyfile library/sys call would be of
  significant advantage.  In this case, we can't currently take advantage
  of the CoW ability of ZFS when doing 'cp A B'  (as has been pointed out
  to me).  'cp' simply opens file A with read(), opens a new file B with
  write(), and then shuffles the data between the two.  Now, if we had a
  copyfile(A,B) primitive, then the 'cp' binary would simply call this
  function, and, depending on the underlying FS, it would get implemented
  differently.  In UFS, it would work as it does now. For ZFS, it would
  work like a snapshot, where file A and B share data blocks (at least
  until someone starts to update either A or B).
 
  Isn't this technique an instance of `deduplication', which seems to be
  a hot idea in storage these days?  I wonder if it could be done
  automatically, behind the scenes, in some fashion.
 
  -- 
  -Gary Mills--Unix Support--U of M Academic Computing and 
  Networking-



Re: [zfs-discuss] zfs space efficiency

2007-06-25 Thread Casper . Dik

I wouldn't de-duplicate without actually verifying that two blocks were
actually bitwise identical.  

Absolutely not, indeed.

But the nice property of hashes is that if the hashes don't match then
the inputs do not either.

I.e., the likelihood of having to do a full bitwise compare is vanishingly
small; the likelihood of it returning equal is high.

Casper


Re: [zfs-discuss] zfs space efficiency

2007-06-25 Thread Bill Sommerfeld
[This is version 2.  the first one escaped early by mistake]

On Sun, 2007-06-24 at 16:58 -0700, dave johnson wrote:
 The most common non-proprietary hash calc for file-level deduplication seems 
 to be the combination of SHA1 and MD5 together.  Collisions have been 
 shown to exist in MD5 and are theorized to exist in SHA1 by extrapolation, but 
 the probability of collisions occurring simultaneously in both is as small as 
 the capacity of ZFS is large :)

No.  Collisions in *any* hash function with output smaller than input
are known to exist through information theory (you can't put kilobytes
of information into a 128- or 160-bit bucket).  The tricky part lies in
finding collisions faster than a brute force search would find them.

Last I checked, the cryptographers specializing in hash functions were
pessimistic; the breakthroughs in collision-finding by Wang & crew a
couple years ago had revealed how little the experts actually knew about
building collision-resistant hash functions; the advice to those of us
who have come to rely on that hash function property was to migrate now
to sha256/sha512 (notably, ZFS uses sha256, not sha1), and then migrate
again once the cryptographers felt they had a better grip on the
problem; the fear was that the newly discovered attacks would generalize
to sha256.

But there's another way -- design the system so correct behavior doesn't
rely on collisions being impossible to find.

I wouldn't de-duplicate without actually verifying that two blocks or
files were actually bitwise identical; if you do this, the
collision-resistance of the hash function becomes far less important to
correctness.  

- Bill
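A toy sketch of that design at the file level (the paths below are made up): the hash only nominates candidates, and a byte-for-byte compare makes the final call, so correctness never rests on collision resistance alone.

```shell
a=/pool/fileA; b=/pool/fileB   # placeholder paths

# Cheap filter: different hashes mean definitely different content
if [ "$(sha256sum < "$a")" = "$(sha256sum < "$b")" ]; then
    # Expensive confirmation: cmp(1) verifies bitwise identity
    if cmp -s "$a" "$b"; then
        echo "bitwise identical: safe to deduplicate"
    fi
fi
```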






Re: [zfs-discuss] zfs space efficiency

2007-06-25 Thread Erik Trimble

Bill Sommerfeld wrote:

[This is version 2.  the first one escaped early by mistake]

On Sun, 2007-06-24 at 16:58 -0700, dave johnson wrote:
  
The most common non-proprietary hash calc for file-level deduplication seems 
to be the combination of SHA1 and MD5 together.  Collisions have been 
shown to exist in MD5 and are theorized to exist in SHA1 by extrapolation, but 
the probability of collisions occurring simultaneously in both is as small as 
the capacity of ZFS is large :)



No.  Collisions in *any* hash function with output smaller than input
are known to exist through information theory (you can't put kilobytes
of information into a 128- or 160-bit bucket).  The tricky part lies in
finding collisions faster than a brute force search would find them.

Last I checked, the cryptographers specializing in hash functions were
pessimistic; the breakthroughs in collision-finding by Wang & crew a
couple years ago had revealed how little the experts actually knew about
building collision-resistant hash functions; the advice to those of us
who have come to rely on that hash function property was to migrate now
to sha256/sha512 (notably, ZFS uses sha256, not sha1), and then migrate
again once the cryptographers felt they had a better grip on the
problem; the fear was that the newly discovered attacks would generalize
to sha256.

But there's another way -- design the system so correct behavior doesn't
rely on collisions being impossible to find.

I wouldn't de-duplicate without actually verifying that two blocks or
files were actually bitwise identical; if you do this, the
collision-resistance of the hash function becomes far less important to
correctness.  


- Bill
  
I'm assuming the de-duplication scheme would be run in a similar manner 
as scrub currently is under ZFS.  That is, infrequently, batched, and 
interruptible. :-)


Long before we look at deduplication, I'd vote for being able to 
optimize the low-hanging fruit of the instance we KNOW that two files 
are identical (i.e. on copying).


Oh, and last I looked, there was no consensus that there wouldn't be 
considerable overlap between collision-causing files and MD5 checksums 
and SHA1 checksums.  That is, there is no confidence that those datasets 
which cause collision under MD5 will not cause collision under SHA.  
They might, they might not, but it's kinda like the P-NP problem right 
now (as to determining the scope of the overlap).  So don't make any 
assumptions about the validity of using two different checksum 
algorithms.  I think (as Casper said), that should you need to, use SHA 
to weed out the cases where the checksums are different (since, that 
definitively indicates they are different), then do a bitwise compare on 
any that produce the same checksum, to see if they really are the same file.


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)



Re: [zfs-discuss] zfs space efficiency

2007-06-25 Thread Frank Cusack

On June 25, 2007 1:02:38 PM -0700 Erik Trimble [EMAIL PROTECTED] wrote:

algorithms.  I think (as Casper said), that should you need to, use SHA
to weed out the cases where the checksums are different (since, that
definitively indicates they are different), then do a bitwise compare on
any that produce the same checksum, to see if they really are the same
file.


Additionally, when you start the comparison, you are likely to find out
*right away* (within the first few bytes of the first block) that the
files are different.

-frank


Re: [zfs-discuss] zfs space efficiency

2007-06-24 Thread Matthew Ahrens

Will Murnane wrote:

On 6/23/07, Erik Trimble [EMAIL PROTECTED] wrote:

Now, wouldn't it be nice to have syscalls which would implement cp and
mv, thus abstracting it away from the userland app?



Not really.  Different apps want different behavior in their copying,
so you'd have to expose a whole lot of things - how much of the copy
has completed? how fast is it going? - even if they never get used by
the userspace app.  And it duplicates functionality - you can do
everything necessary in userspace with stat(), read(), write() and
friends.


A copyfile primitive would be great!  It would solve the problem of having 
all those friends to deal with -- stat(), extended attributes, UFS ACLs, 
NFSv4 ACLs, CIFS attributes, etc.  That isn't to say that it would have to be 
implemented in the kernel; it could easily be a library function.


--matt


Re: [zfs-discuss] zfs space efficiency

2007-06-24 Thread Erik Trimble

Matthew Ahrens wrote:

Will Murnane wrote:

On 6/23/07, Erik Trimble [EMAIL PROTECTED] wrote:
Now, wouldn't it be nice to have syscalls which would implement cp 
and

mv, thus abstracting it away from the userland app?



Not really.  Different apps want different behavior in their copying,
so you'd have to expose a whole lot of things - how much of the copy
has completed? how fast is it going? - even if they never get used by
the userspace app.  And it duplicates functionality - you can do
everything necessary in userspace with stat(), read(), write() and
friends.


A copyfile primitive would be great!  It would solve the problem of 
having all those friends to deal with -- stat(), extended 
attributes, UFS ACLs, NFSv4 ACLs, CIFS attributes, etc.  That isn't to 
say that it would have to be implemented in the kernel; it could 
easily be a library function.


--matt
I'm with Matt.  Having a copyfile library/sys call would be of 
significant advantage.  In this case, we can't currently take advantage 
of the CoW ability of ZFS when doing 'cp A B'  (as has been pointed out 
to me).  'cp' simply opens file A with read(), opens a new file B with 
write(), and then shuffles the data between the two.  Now, if we had a 
copyfile(A,B) primitive, then the 'cp' binary would simply call this 
function, and, depending on the underlying FS, it would get implemented 
differently.  In UFS, it would work as it does now. For ZFS, it would 
work like a snapshot, where file A and B share data blocks (at least 
until someone starts to update either A or B).


There are other instances of the various POSIX calls which make it hard 
to take advantage of modern advanced filesystems, while still retaining 
backwards compatibility.
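(Hedged aside: primitives along these lines did eventually appear in various places. On Linux filesystems with reflink support, for example, GNU coreutils cp can ask the filesystem to clone blocks; 'A' and 'B' below are placeholders, and support depends on the filesystem.)

```shell
# --reflink=auto clones data blocks where the filesystem supports it
# (copy-on-write sharing, exactly the 'cp A B' case discussed above),
# and silently falls back to a normal copy where it doesn't
cp --reflink=auto A B
```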


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)



Re: [zfs-discuss] zfs space efficiency

2007-06-24 Thread Gary Mills
On Sun, Jun 24, 2007 at 03:39:40PM -0700, Erik Trimble wrote:
 Matthew Ahrens wrote:
 Will Murnane wrote:
 On 6/23/07, Erik Trimble [EMAIL PROTECTED] wrote:
 Now, wouldn't it be nice to have syscalls which would implement cp 
 and
 mv, thus abstracting it away from the userland app?

 A copyfile primitive would be great!  It would solve the problem of 
 having all those friends to deal with -- stat(), extended 
 attributes, UFS ACLs, NFSv4 ACLs, CIFS attributes, etc.  That isn't to 
 say that it would have to be implemented in the kernel; it could 
 easily be a library function.
 
 I'm with Matt.  Having a copyfile library/sys call would be of 
 significant advantage.  In this case, we can't currently take advantage 
 of the CoW ability of ZFS when doing 'cp A B'  (as has been pointed out 
 to me).  'cp' simply opens file A with read(), opens a new file B with 
 write(), and then shuffles the data between the two.  Now, if we had a 
 copyfile(A,B) primitive, then the 'cp' binary would simply call this 
 function, and, depending on the underlying FS, it would get implemented 
 differently.  In UFS, it would work as it does now. For ZFS, it would 
 work like a snapshot, where file A and B share data blocks (at least 
 until someone starts to update either A or B).

Isn't this technique an instance of `deduplication', which seems to be
a hot idea in storage these days?  I wonder if it could be done
automatically, behind the scenes, in some fashion.

-- 
-Gary Mills--Unix Support--U of M Academic Computing and Networking-


Re: [zfs-discuss] zfs space efficiency

2007-06-24 Thread dave johnson
How other storage systems do it is by calculating a hash value for each file 
(or block), storing that value in a db, then checking every new file (or 
block) commit against the db for a match and, if found, replacing the file (or 
block) with a reference to the duplicate entry in the db.


The most common non-proprietary hash calc for file-level deduplication seems 
to be the combination of SHA1 and MD5 together.  Collisions have been shown 
to exist in MD5 and are theorized to exist in SHA1 by extrapolation, but the 
probability of collisions occurring simultaneously in both is as small as 
the capacity of ZFS is large :)
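
As a toy illustration of that hash-db scheme (the class name and the 128K 
block size are made up for the example; a real implementation would live 
inside the filesystem):

```python
import hashlib

class BlockStore:
    """Toy content-addressed store: an identical block is kept only once."""

    def __init__(self, block_size=128 * 1024):
        self.block_size = block_size
        self.blocks = {}   # digest pair -> block bytes (the "db")
        self.files = {}    # filename -> ordered list of digest pairs

    def _digest(self, block):
        # Pair SHA1 with MD5 so a collision would have to occur in
        # both hashes simultaneously.
        return (hashlib.sha1(block).hexdigest(),
                hashlib.md5(block).hexdigest())

    def write(self, name, data):
        refs = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            key = self._digest(block)
            self.blocks.setdefault(key, block)   # store only if unseen
            refs.append(key)
        self.files[name] = refs

    def read(self, name):
        return b"".join(self.blocks[k] for k in self.files[name])
```

Two mostly identical files then share every unchanged block; only the 
differing blocks consume new space.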


While computationally intense, this would be a VERY welcome feature addition 
to ZFS, and given the existing infrastructure within the filesystem already, 
while non-trivial by any means, it seems a prime candidate.  I am not a 
programmer, so I do not have the expertise to spearhead such a movement, but I 
would think getting at least a placeholder Goals and Objectives page into 
the OZFS community pages would be a good start, even if movement on this 
doesn't come for a year or more.


Thoughts ?

-=dave



Re: [zfs-discuss] zfs space efficiency

2007-06-24 Thread Torrey McMahon
The interesting collision is going to be filesystem-level encryption 
vs. de-duplication, as the former makes the latter pretty difficult.




[zfs-discuss] zfs space efficiency

2007-06-23 Thread roland
hello !

i'm thinking of using zfs for backing up large binary data files (i.e. vmware 
vms, oracle databases) and want to rsync them at regular intervals from other 
systems to one central zfs system with compression on.

i'd like to have historical versions and thus want to make a snapshot before 
each backup - i.e. before each rsync run.

now i wonder:
if i have one large datafile on zfs, make a snapshot of the zfs fs holding 
it, and then overwrite that file with a newer version with slight differences 
inside - what is the real disk consumption on the zfs side ?
do i need to handle this in a special way to make it space-efficient ? do i 
need to use rsync --inplace ?

typically, rsync writes a complete new (temporary) file based on the existing 
one and on what has changed at the remote site - and then replaces the old one 
with the new one via delete/rename.  i assume this will eat up my backup space 
very quickly, even when using snapshots and even if only small parts of the 
large file are changing.
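
The space-saving behavior --inplace aims for can be sketched as follows (toy 
code; the fixed RECORD size is an assumption, and rsync's real delta algorithm 
is far more involved). Only blocks whose content actually changed get 
rewritten, so a snapshot taken before the update keeps sharing all the 
untouched blocks:

```python
RECORD = 128 * 1024   # assumed to match the dataset's recordsize

def update_in_place(path, new_data):
    """Rewrite only the records of `path` that differ from `new_data`.

    Untouched records are never rewritten, so under a copy-on-write
    filesystem a snapshot taken beforehand keeps sharing them.
    Returns the number of records rewritten.
    """
    changed = 0
    with open(path, "r+b") as f:
        for off in range(0, len(new_data), RECORD):
            block = new_data[off:off + RECORD]
            f.seek(off)
            if f.read(len(block)) != block:
                f.seek(off)
                f.write(block)
                changed += 1
        f.truncate(len(new_data))    # handle the file shrinking
    return changed
```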

comments? 

regards
roland
 
 


Re: [zfs-discuss] zfs space efficiency

2007-06-23 Thread Matthew Ahrens

Erik Trimble wrote:

roland wrote:

hello !

i think of using zfs for backup purpose of large binary data files 
(i.e. vmware vm`s, oracle database) and want to rsync them in regular 
interval from other systems to one central zfs system with compression 
on.


i`d like to have historical versions and thus want to make a snapshot 
before each backup - i.e. rsync.


now i wonder:
if i have one large datafile on zfs, make a snapshot from that zfs fs 
holding it and then overwriting that file by a newer version with 
slight differences inside - what about the real disk consumption on 
the zfs side ?
do i need to handle this a special way to make it space-efficient ? do 
i need to use rsync --inplace ?


typically , rsync writes a complete new (temporary) file based on the 
existing one and on what has changed at the remote site - and then 
replacing the old one by the new one via delete/rename. i assume this 
will eat up my backup space very quickly, even when using snapshots 
and even if only small parts of the large file are changing.


You are correct, when you write a new file, we will allocate space for that 
entire new file, even if some of its blocks happen to have the same content 
as blocks in the previous file.


This is one of the reasons that we implemented zfs send.  If only a few 
blocks of a large file were modified on the sending side, then only those 
blocks will be sent, and we will find the blocks extremely quickly (in 
O(modified blocks) time; using the POSIX interfaces (as rsync does) would 
take O(filesize) time).  Of course, if the system you're backing up from is 
not running ZFS, this does not help you.


Under ZFS, any equivalent to 'cp A B' takes up no extra space. The 
metadata is updated so that B points to the blocks in A.  Should anyone 
begin writing to B, only the updated blocks are added on disk, with the 
metadata for B now containing the proper block list to be used (some 
from A, and the new blocks in B).   So, in your case, you get maximum 
space efficiency, where only the new blocks are stored, and the old 
blocks simply are referenced.


That is not correct; what led you to believe that?  With ZFS (and UFS, EXT2, 
WAFL, VxFS, etc.), 'cp a b' will copy the contents of the file, resulting in 
two copies stored on disk.


--matt


Re: [zfs-discuss] zfs space efficiency

2007-06-23 Thread Erik Trimble

roland wrote:

hello !

i think of using zfs for backup purpose of large binary data files (i.e. vmware 
vm`s, oracle database) and want to rsync them in regular interval from other 
systems to one central zfs system with compression on.

i`d like to have historical versions and thus want to make a snapshot before 
each backup - i.e. rsync.

now i wonder:
if i have one large datafile on zfs, make a snapshot from that zfs fs holding 
it and then overwriting that file by a newer version with slight differences 
inside - what about the real disk consumption on the zfs side ?
do i need to handle this a special way to make it space-efficient ? do i need 
to use rsync --inplace ?

typically , rsync writes a complete new (temporary) file based on the existing 
one and on what has changed at the remote site - and then replacing the old one 
by the new one via delete/rename. i assume this will eat up my backup space 
very quickly, even when using snapshots and even if only small parts of the 
large file are changing.

comments? 


regards
roland
 
  
I'm pretty sure about this answer, but others should correct me if I'm 
wrong. :-)


Under ZFS, any equivalent to 'cp A B' takes up no extra space. The 
metadata is updated so that B points to the blocks in A.  Should anyone 
begin writing to B, only the updated blocks are added on disk, with the 
metadata for B now containing the proper block list to be used (some 
from A, and the new blocks in B).   So, in your case, you get maximum 
space efficiency, where only the new blocks are stored, and the old 
blocks simply are referenced.


What I'm not sure of is exactly how ZFS does this.  Does the metadata 
for B contain an entire list of all blocks (in order) for that file? Or 
does each block effectively contain a pointer to the next (and 
possibly prev) block, in effect a doubly-linked list?   I'd hope for 
the former, since that seems most efficient.





Re: [zfs-discuss] zfs space efficiency

2007-06-23 Thread Darren Dunham
 if i have one large datafile on zfs, make a snapshot from that zfs fs
 holding it and then overwriting that file by a newer version with
 slight differences inside - what about the real disk consumption on
 the zfs side ?

If all the blocks are rewritten, then they're all new blocks as far as
ZFS knows.

 do i need to handle this a special way to make it
 space-efficient ? do i need to use rsync --inplace ?

I would certainly try that to see if it worked, and if your access can
cope with files being partially edited at times.

 typically , rsync writes a complete new (temporary) file based on the
 existing one and on what has changed at the remote site - and then
 replacing the old one by the new one via delete/rename. i assume this
 will eat up my backup space very quickly, even when using snapshots
 and even if only small parts of the large file are changing.

Yes, I think so.

I believe this is even more of a problem for a server with Windows
clients (via CIFS) because many of the apps tend to rewrite the entire
file on save.  Network Appliance eventually added an option on their
software to let you do additional work and save space if files are
substantially similar to the last snapshot.

Theirs works on file close, so it's only a CIFS option.  ZFS could
conceivably do the same for local access as well, but I don't think
anyone's tried to work on it yet.


-- 
Darren Dunham   [EMAIL PROTECTED]
Senior Technical Consultant TAOShttp://www.taos.com/
Got some Dr Pepper?   San Francisco, CA bay area
  This line left intentionally blank to confuse you. 


Re: [zfs-discuss] zfs space efficiency

2007-06-23 Thread Erik Trimble

Matthew Ahrens wrote:

Erik Trimble wrote:
Under ZFS, any equivalent to 'cp A B' takes up no extra space. The 
metadata is updated so that B points to the blocks in A.  Should 
anyone begin writing to B, only the updated blocks are added on disk, 
with the metadata for B now containing the proper block list to be 
used (some from A, and the new blocks in B).   So, in your case, you 
get maximum space efficiency, where only the new blocks are stored, 
and the old blocks simply are referenced.


That is not correct; what led you to believe that?  With ZFS (and 
UFS, EXT2, WAFL, VxFS, etc), cp a b will copy the contents of the 
file, resulting in two copies stored on disk.


--matt
Basically, the descriptions of Copy on Write.  Or does this apply only 
to Snapshots?   My original understanding was that CoW applied whenever 
you were making a duplicate of an existing file.  I can understand that 
'cp' might not do that (given that there must be some (system-call) 
mechanism for ZFS to distinguish that we are replicating an existing 
file, not just creating a whole new one).   Now that I think about it, 
I'm not sure that I can see any way to change the behavior of POSIX 
calls to allow for this type of mechanism. You'd effectively have to 
create a whole new system call with multiple file arguments.  sigh


Wishfull thinking, I guess.


Now, wouldn't it be nice to have syscalls which would implement cp and 
mv, thus abstracting it away from the userland app?





Re: [zfs-discuss] zfs space efficiency

2007-06-23 Thread Will Murnane

On 6/23/07, Erik Trimble [EMAIL PROTECTED] wrote:

Matthew Ahrens wrote:
Basically, the descriptions of Copy on Write.  Or does this apply only
to Snapshots?   My original understanding was that CoW applied whenever
you were making a duplicate of an existing file.

CoW happens all the time.  If you overwrite a file, instead of writing
it to the same location on disk, ZFS allocates a new block, writes to
that, and then creates a new tree in parallel (all on new, previously
unused blocks).  Then it changes the root of the tree to point to the
newly allocated blocks.


Now that I think about it,
I'm not sure that I can see any way to change the behavior of POSIX
calls to allow for this type of mechanism. You'd effectively have to
create a whole new system call with multiple file arguments.  sigh

Files that are mostly the same, or exactly the same?  If they're
exactly the same, it's called a hardlink ;)  If they're mostly the
same, I guess, you could come up with a combination of a sparse file
and a symlink.  But I don't think the needed functionality is commonly
enough used to bother implementing in kernel space.  If you really
want it in your application, do it yourself.  Make a little file with
two filenames, and a bitmap indicating which of them the application
blocks should come from.


Now, wouldn't it be nice to have syscalls which would implement cp and
mv, thus abstracting it away from the userland app?

Not really.  Different apps want different behavior in their copying,
so you'd have to expose a whole lot of things - how much of the copy
has completed? how fast is it going? - even if they never get used by
the userspace app.  And it duplicates functionality - you can do
everything necessary in userspace with stat(), read(), write() and
friends.