On Thu, 06 Jan 2011 22:42:15 PST Michael DeMan sola...@deman.com wrote:
To be quite honest, I too am skeptical about using de-dupe based on SHA-256 alone. In prior posts it was asked that the potential adopter of the technology provide the mathematical reason NOT to use SHA-256 only.
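For what it's worth, the usual mathematical argument is the birthday bound. A back-of-the-envelope sketch (assuming SHA-256 behaves as a uniform random 256-bit function, which is exactly the assumption skeptics question):

```python
# Birthday-bound estimate: among n distinct blocks, the probability of any
# SHA-256 collision is roughly n^2 / 2^257 (uniform-hash assumption).
def collision_probability(n_blocks: int) -> float:
    return n_blocks ** 2 / 2 ** 257

# Example: a petabyte of unique 128 KiB blocks is 2^33 blocks.
n = 2 ** 33
print(collision_probability(n))   # ~3e-58, far below hardware error rates
```

That number is why "verify=off" dedup is usually defended; whether it is an acceptable risk is the judgment call being debated here.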
On Mon, 20 Dec 2010 11:27:41 PST Erik Trimble erik.trim...@oracle.com wrote:
The problem boils down to this:
When ZFS does a resilver, it walks the METADATA tree to determine what
order to rebuild things from. That means it resilvers the very first
slab ever written, then the next
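A toy model of the consequence (the numbers are invented): rebuilding in block-birth order rather than on-disk order turns the rebuild into random seeks.

```python
# Blocks with a birth transaction group (txg) and an on-disk offset.
# A resilver visits them in birth order, not offset order (toy data).
blocks = [
    {"txg": 1, "offset": 900},
    {"txg": 2, "offset": 100},
    {"txg": 3, "offset": 500},
]
resilver_order = [b["offset"] for b in sorted(blocks, key=lambda b: b["txg"])]
print(resilver_order)   # [900, 100, 500]: seeks all over the disk, not one sweep
```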
The 45-byte score is the checksum of the top of the tree, isn't that
right?
Yes. Plus an optional label.
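If the score in question is a Venti/vac score (an assumption on my part), it names the root of a hash tree: score the leaves, pack the scores into a pointer block, score that. A minimal sketch, with made-up block contents:

```python
import hashlib

# Venti-style content-addressed tree (Venti uses SHA-1, so scores are 20 bytes).
def score(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def tree_root(blocks: list[bytes]) -> bytes:
    # Score each leaf, pack the scores into one pointer block, score that:
    # the result names the entire tree.
    pointers = b"".join(score(b) for b in blocks)
    return score(pointers)

root = tree_root([b"first block", b"second block"])
print("vac:" + root.hex())   # "vac:" + 40 hex digits; with a newline, 45 bytes
```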
ZFS snapshots and clones save a lot of space, but the
'content-hash == address' trick means you could potentially save
much more.
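The trick can be sketched in a few lines; this is an illustrative in-memory model, not ZFS's on-disk dedup table:

```python
import hashlib

# 'content-hash == address': a block's SHA-256 digest is its address, so
# writing identical content twice stores it only once. (Toy in-memory store.)
store: dict[bytes, bytes] = {}

def put(block: bytes) -> bytes:
    addr = hashlib.sha256(block).digest()
    store.setdefault(addr, block)    # no-op if the block already exists
    return addr

a1 = put(b"the same 128K of data")
a2 = put(b"the same 128K of data")
print(a1 == a2, len(store))   # True 1: one copy stored, two references
```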
Especially if you carry around large files (disk
I have budget constraints, so I can use only user-level storage.
Until I discovered ZFS I used Subversion and Git, but neither of them is designed to manage gigabytes of data, some to be versioned, some to be unversioned.
I can't afford silent data corruption and, if the final response is
Robert Milkowski wrote:
Hello Mario,
Wednesday, May 9, 2007, 5:56:18 PM, you wrote:
MG I've read that it's supposed to go at full speed, i.e. as fast as
MG possible. I'm doing a disk replace and what zpool reports kind of
MG surprises me. The resilver goes on at 1.6MB/s. Did
Pawel Jakub Dawidek wrote:
This is what I see on Solaris (hole is 4GB):
# /usr/bin/time dd if=/ufs/hole of=/dev/null bs=128k
real 23.7
# /usr/bin/time dd if=/zfs/hole of=/dev/null bs=128k
real 21.2
# /usr/bin/time dd if=/ufs/hole of=/dev/null
[originally reported for ZFS on FreeBSD, but Pawel Jakub Dawidek
says this problem also exists on Solaris, hence this email.]
Summary: on ZFS, the overhead of reading a hole seems far worse
than actually reading data from disk. Small buffers are used to
make this overhead more visible.
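The measurement can be reproduced with a sparse file. A minimal Python sketch (the path and the 64 MiB size are my choices; the thread used a 4 GB hole and dd):

```python
import os
import time

# Create a file that is one large hole (no data blocks), then time reading it
# back with a small vs. a large buffer.
path = "/tmp/hole_demo"
size = 64 * 1024 * 1024
with open(path, "wb") as f:
    f.truncate(size)                 # sparse: allocates no data blocks

def read_all(bufsize: int) -> float:
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:   # unbuffered: one syscall per read
        while f.read(bufsize):
            pass
    return time.perf_counter() - start

t_small = read_all(512)              # dd's default bs: per-call overhead dominates
t_large = read_all(128 * 1024)       # bs=128k: overhead amortized
print("bs=512: %.3fs  bs=128k: %.3fs" % (t_small, t_large))
os.remove(path)
```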
I ran the following