I got a 750 and sliced it and mirrored the other pieces.
Maybe you ran into a bug, because that situation would not be tested much in
the wild... or maybe you just got unlucky and your computer toasted some
data.
Thanks Jeff. I hope my frustration in all this doesn't sound directed at
It solved my problems; the difference was really huge. The onboard Realtek
8111B chip could do about 40 MB/sec file transfer over CIFS, which looked good,
but in reality the I/O speed was very bad: backups to the CIFS share took ages,
copying files to it using FTP and unzipping files located in the
Hi,
I've got a zpool made up of 2 mirrored vdevs. For a moment I had a cabling
problem and lost all disks... I reconnected and onlined the disks. No
resilvering kicked in, so I tried to force a scrub, but nothing's happening.
I issue the command and it's as if I never did.
Any
IHAC using ZFS in production, and he's opening up some files with the
O_SYNC flag. This affects subsequent write()'s by providing
synchronized I/O file integrity completion. That is, each write(2) will
wait for both the file data and file status to be physically updated.
Because of this,
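For anyone following along, here is a minimal sketch (Python, purely for illustration; the path and data are made up and this is not the customer's application) of what an O_SYNC open and write look like:

import os

# Open with O_SYNC: every subsequent write will block until both the file
# data and the file status (metadata) have reached stable storage.
fd = os.open("/tank/app/datafile", os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
try:
    # Does not return until the data is on disk, which is why O_SYNC
    # workloads are so sensitive to synchronous-write (ZIL) latency.
    os.write(fd, b"synchronously written record\n")
finally:
    os.close(fd)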
Patrick Pinchera wrote:
IHAC using ZFS in production, and he's opening up some files with the
O_SYNC flag. This affects subsequent write()'s by providing
synchronized I/O file integrity completion. That is, each write(2) will
wait for both the file data and file status to be physically
Hi All ;
Is there any hope for deduplication on ZFS ?
Mertol
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email [EMAIL
We're running Solaris 10 U5 on lots of Sun SPARC hardware. That's ZFS
version=4. Simple question: how far behind is this version of ZFS as
compared to what is in Nevada? Just point me to the web page, I know it's
out there somewhere.
--
chris -at- microcozm -dot- net
=== Si Hoc Legere Scis
Hi Jeff,
What I'm trying to do is import many copies of a pool that is cloned on a
storage array.
ZFS will only import the first disk (there is only one disk in the pool) and
any clones have the same pool name and GUID and are ignored.
Is there any chance Sun will support external cloned disks
Mertol,
Yes, dedup is certainly on our list and has been actively
discussed recently, so there's hope and some forward progress.
It would be interesting to see where it fits into our customers'
priorities for ZFS. We have a long laundry list of projects.
In addition there are bug fixes, performance
Hi
S10_u5 has version 4; the latest in OpenSolaris is version 10.
See
http://opensolaris.org/os/community/zfs/version/10/
and more generally
http://opensolaris.org/os/community/zfs/version/n/ where n is the version, so
substitute 4 for n to see the version 4 changes, and so on up to 10.
Run zpool upgrade (doesn't actually run an
On Wed, Jul 2, 2008 at 2:10 AM, Jeff Bonwick [EMAIL PROTECTED] wrote:
How difficult would it be to write some code to change the GUID of a pool?
As a recreational hack, not hard at all. But I cannot recommend it
in good conscience, because if the pool contains more than one disk,
the GUID
Matt Harrison wrote:
James C. McPherson wrote:
Matt Harrison wrote:
I seem to have overlooked the first part of your reply; I can just
replace the disks one at a time, and of course the pool would rebuild
itself onto the new disk. Would this automatically extend the size of
A really smart nexus for dedup is right when archiving takes place. For
systems like EMC Centera, dedup is basically a byproduct of checksumming.
Two files with similar metadata that have the same hash? They're identical.
Charles
On 7/7/08 4:25 PM, Neil Perrin [EMAIL PROTECTED] wrote:
Even better would be using the ZFS block checksums (assuming we are only
summing the data, not its position or time :)...
Then we could have two files that have 90% the same blocks, and still
get some dedup value... ;)
Nathan.
Charles Soto wrote:
A really smart nexus for dedup is right when
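To make the block-level idea concrete, here is a toy sketch (invented names only, keyed on per-block checksums; nothing here reflects how ZFS itself would implement dedup) showing how two files that share most of their blocks would still dedup well:

import hashlib

BLOCK_SIZE = 128 * 1024      # assume 128K records, purely for illustration
block_store = {}             # checksum -> the one stored copy of that block
references = []              # one entry per logical block written

def write_block(data):
    """Store a block only the first time its checksum is seen."""
    digest = hashlib.sha256(data).hexdigest()
    if digest not in block_store:
        block_store[digest] = data    # first occurrence: really store it
    references.append(digest)         # every occurrence: just a reference
    return digest

def dedup_ratio():
    """Logical blocks written divided by unique blocks actually stored."""
    return float(len(references)) / max(len(block_store), 1)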
Neil Perrin wrote:
Mertol,
Yes, dedup is certainly on our list and has been actively
discussed recently, so there's hope and some forward progress.
It would be interesting to see where it fits into our customers'
priorities for ZFS. We have a long laundry list of projects.
In addition
On Tue, 8 Jul 2008, Nathan Kroenert wrote:
Even better would be using the ZFS block checksums (assuming we are only
summing the data, not its position or time :)...
Then we could have two files that have 90% the same blocks, and still
get some dedup value... ;)
It seems that the hard
On Mon, 7 Jul 2008, Jonathan Loran wrote:
use ZFS is as nearline storage for backup data. I have a 16TB server
that provides a file store for an EMC Networker server. I'm seeing a
compressratio of 1.73, which is mighty impressive, since we also use
native EMC compression during the backups.
On Mon, Jul 07, 2008 at 07:56:26PM -0500, Bob Friesenhahn wrote:
This deduplication technology seems similar to the Microsoft ads I
see on TV which advertise how their new technology saves the customer
Quantum's claim of 20:1 just doesn't jibe in my head, either, for some
reason.
-brian
On Mon, Jul 7, 2008 at 7:40 PM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
The actual benefit of data deduplication to an enterprise seems
negligible unless the backup system directly supports it. In the
enterprise the cost of storage has more to do with backing up the data
than the amount of
I second this, provided we also check that the data is in fact
identical. Checksum collisions are likely given the sizes of
disks and the sizes of checksums; and some users actually deliberately
generate data with colliding checksums (researchers and nefarious
users). Dedup must be
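A rough sketch of that verify-on-match policy (illustrative only, with made-up names): a checksum hit is only treated as a duplicate after a byte-for-byte comparison, so deliberately colliding data still gets stored separately.

import hashlib

stored = {}   # checksum -> list of distinct blocks that share that checksum

def dedup_write(data):
    digest = hashlib.sha256(data).digest()
    for existing in stored.get(digest, []):
        if existing == data:          # verify: bytes really are identical
            return existing           # genuine duplicate, reuse it
    # First occurrence, or a collision with different contents: keep it.
    stored.setdefault(digest, []).append(data)
    return data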
Good points. I see the archival process as a good candidate for adding
dedup because it is essentially doing what a stage/release archiving system
already does - faking the existence of data via metadata. Those blocks
aren't actually there, but they're still accessible because they're
On Mon, 7 Jul 2008, Mike Gerdts wrote:
As I have considered deduplication for application data I see several
things happen in various areas.
You have provided an excellent description of gross inefficiencies in
the way systems and software are deployed today, resulting in massive
However, my pool is not behaving well. I have had
insufficient replicas for the pool and corrupted
data for the mirror piece that is on both the USB
drives.
I'm learning about ZFS for the same reason: I want a reliable home server. So
I've been reading the archives. In March 2007 there was
On Mon, Jul 7, 2008 at 9:24 PM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
On Mon, 7 Jul 2008, Mike Gerdts wrote:
There tend to be organizational walls between those that manage
storage and those that consume it. As storage is distributed across
a network (NFS, iSCSI, FC) things like
Bohdan Tashchuk wrote:
However, my pool is not behaving well. I have had
insufficient replicas for the pool and corrupted
data for the mirror piece that is on both the USB
drives.
I'm learning about ZFS for the same reason: I want a reliable home server.
So I've been reading the
Oh, I agree. Much of the duplication described is clearly the result of
bad design in many of our systems. After all, most of an OS can be served
off the network (diskless systems etc.). But much of the dupe I'm talking
about is less about not using the most efficient system administration
On Mon, Jul 7, 2008 at 11:07 PM, Charles Soto [EMAIL PROTECTED] wrote:
So, while much of the situation is caused by bad data management, there
aren't always systems we can employ that prevent it. Done right, dedup can
certainly be worth it for my operations. Yes, teaching the user the
right
Does anyone know a tool that can look over a dataset and give
duplication statistics? I'm not looking for something incredibly
efficient but I'd like to know how much it would actually benefit our
dataset: HiRISE has a large set of spacecraft data (images) that could
potentially have large
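One rough way to get a ballpark figure (a sketch, not an existing tool; the block size and approach are guesses, and it ignores ZFS record boundaries and compression) is to hash fixed-size blocks across the dataset and count repeats:

import hashlib, os, sys

BLOCK_SIZE = 128 * 1024   # match the dataset's recordsize for a fairer estimate

def scan(root):
    seen, total_blocks, dup_blocks = set(), 0, 0
    for dirpath, _, files in os.walk(root):
        for name in files:
            try:
                with open(os.path.join(dirpath, name), "rb") as f:
                    while True:
                        block = f.read(BLOCK_SIZE)
                        if not block:
                            break
                        total_blocks += 1
                        digest = hashlib.sha256(block).digest()
                        if digest in seen:
                            dup_blocks += 1
                        else:
                            seen.add(digest)
            except (IOError, OSError):
                pass   # skip unreadable files
    return total_blocks, dup_blocks

if __name__ == "__main__":
    total, dups = scan(sys.argv[1] if len(sys.argv) > 1 else ".")
    if total:
        print("blocks: %d  duplicate blocks: %d  (%.1f%% potentially dedupable)"
              % (total, dups, 100.0 * dups / total))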