Short answer: No.
Long answer: Not without rewriting the previously written data. Data
is being striped over all of the top level VDEVs, or at least it should
be. But there is no way, at least not built into ZFS, to re-allocate the
storage to perform I/O balancing. You would basically have to
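For reference, the workaround people usually land on (not a built-in feature; the pool and dataset names below are hypothetical) is to rewrite the data so ZFS re-allocates it across all current top-level VDEVs, for example with a local send/receive:

```shell
# Hypothetical pool/dataset names. Rewriting the data forces ZFS
# to re-allocate blocks, striping them over all current top-level
# VDEVs, including any that were added after the data was written.
zfs snapshot tank/data@rebalance
zfs send tank/data@rebalance | zfs receive tank/data-rebalanced

# After verifying the copy, swap the datasets:
zfs rename tank/data tank/data-old
zfs rename tank/data-rebalanced tank/data
zfs destroy -r tank/data-old
```

This obviously requires enough free space to hold a second copy of the dataset while the rewrite is in flight.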
at 22:48, Eduardo Bragatto edua...@bragatto.com wrote:
On Aug 3, 2010, at 10:08 PM, Khyron wrote:
Long answer: Not without rewriting the previously written data. Data
is being striped over all of the top level VDEVs, or at least it should
be. But there is no way, at least not built
My inclination, based on what I've read and heard from others, is to say
no.
But again, the best way to find out is to write the code. :\
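As a starting point for anyone experimenting, the current layout can at least be inspected (pool name hypothetical):

```shell
# Show how data is currently spread across top-level VDEVs;
# the CAP/allocation per VDEV reveals any imbalance after
# a VDEV has been added to an existing pool.
zpool iostat -v tank
zpool status tank
```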
On Wed, Jun 9, 2010 at 11:45, Edward Ned Harvey solar...@nedharvey.com wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
It would be helpful if you posted more information about your
configuration.
Numbers *are* useful too, but minimally, describing your setup, use case,
the hardware and other such facts would provide people a place to start.
There are much brighter stars on this list than myself, but if you are
To answer the question you asked here...the answer is no. There have been
MANY discussions of this in the past. Here's the long thread I started
back in May about backup strategies for ZFS pools and file systems:
http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/038678.html
But
Ian: Of course they expected answers to those questions here. It seems many
people do not read the forums or mailing list archives to see their
questions
previously asked (and answered) many many times over, or the flames that
erupt from them. It's scary how much people don't check historical
A few things come to mind...
1. A lot better than...what? Setting the recordsize to 4K got you some
deduplication but maybe the pertinent question is what were you
expecting?
2. Dedup is fairly new. I haven't seen any reports of experiments like
yours so...CONGRATULATIONS!! You're probably
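One way to quantify what dedup is actually buying you, before deciding whether a 4K recordsize was worth it (pool and dataset names hypothetical):

```shell
# Pool-wide dedup ratio is shown in the DEDUP column:
zpool list tank

# Simulate dedup on existing data to see the table histogram
# and the estimated ratio without committing to it:
zdb -S tank

# A smaller recordsize (e.g. 4K) gives dedup more blocks to
# match, but also makes the dedup table much larger:
zfs set recordsize=4k tank/data
zfs set dedup=on tank/data
```

Note `recordsize` and `dedup` only affect data written after the property is set, so the ratio on pre-existing data won't change.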
This is how rumors get started.
From reading that thread, the OP didn't seem to know much of anything
about...
anything. Even less so about Solaris and OpenSolaris. I'd advise not to
get your
news from mailing lists, especially not mailing lists for people who don't
use the
product you're
I have no idea who you're talking to, but presumably you mean this link:
http://lists.freebsd.org/pipermail/freebsd-questions/2010-April/215269.html
Worked fine for me. I didn't post it. I'm not the OP on this thread or on
the FreeBSD thread. So what broken link are you talking about and to
I would advise getting familiar with the basic terminology and vocabulary of
ZFS
first. Start with the Solaris 10 ZFS Administration Guide. It's a bit more
complete
for a newbie.
http://docs.sun.com/app/docs/doc/819-5461?l=en
You can then move on to the Best Practices Guide, Configuration
Response below...
2010/4/5 Andreas Höschler ahoe...@smartsoft.de
Hi Edward,
thanks a lot for your detailed response!
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Andreas Höschler
• I would like to remove the two SSDs as log
Yes, I think Eric is correct.
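For anyone following along: log device removal requires a sufficiently recent pool version (19 or later, if I recall correctly). A sketch, with hypothetical pool and device names:

```shell
# Confirm the pool version supports log device removal:
zpool upgrade -v

# For a mirrored log, remove the top-level "mirror-N" name
# shown by zpool status:
zpool status tank
zpool remove tank mirror-1

# For a single (non-mirrored) log device, name it directly:
zpool remove tank c2t0d0
```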
Funny, this is an adjunct to the thread I started entitled "Thoughts on ZFS
Pool Backup Strategies". I was going to include this point in that thread but
thought better of it.
It would be nice if there were an easy way to extract a pool configuration,
with
all of the
Heh.
The original definition of the "I" was "inexpensive". It was never meant to be
"independent". I guess that was changed by the vendors. The idea all along was
to take inexpensive hardware and use software to turn it into a reliable
system.
http://portal.acm.org/citation.cfm?id=50214
Responses inline below...
On Sat, Mar 20, 2010 at 00:57, Edward Ned Harvey solar...@nedharvey.com wrote:
1. NDMP for putting zfs send streams on tape over the network. So
Tell me if I missed something here. I don't think I did. I think this
sounds like crazy talk.
I used NDMP up till
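For context, the do-it-yourself version people keep proposing is to stream a send directly to tape. Mechanically it works, but the list has been over why it's risky (device names hypothetical):

```shell
# A single bit error anywhere in a saved send stream makes the
# entire stream unreceivable, which is why storing raw streams
# on tape is widely discouraged as an archival format.
zfs snapshot -r tank@backup
zfs send -R tank@backup | dd of=/dev/rmt/0n bs=1048576

# Restore is the reverse:
dd if=/dev/rmt/0n bs=1048576 | zfs receive -F tank
```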
Ahhh, this has been...interesting...some real personalities involved in
this
discussion. :p The following is long-ish but I thought a re-cap was in
order.
I'm sure we'll never finish this discussion, but I want to at least have a
new
plateau or base from which to consider these questions.
I've
I'm also a Mac user. I use Mozy instead of DropBox, but it sounds like
DropBox should get a place at the table. I'm about to download it in a few
minutes.
I'm right now re-cloning my internal HD due to some HFS+ weirdness. I
have to completely agree that ZFS would be a great addition to MacOS
Mar 2010, Khyron wrote:
Getting better FireWire performance on OpenSolaris would be nice though.
Darwin drivers are open...hmmm.
OS-X is only (legally) used on Apple hardware. Has anyone considered
that since Firewire is important to Apple, they may have selected a
particular Firewire chip
Responses inline...
On Tue, Mar 16, 2010 at 07:35, Robin Axelsson
gu99r...@student.chalmers.se wrote:
I've been informed that newer versions of ZFS supports the usage of hot
spares which is denoted for drives that are not in use but available for
resynchronization/resilvering should one of the
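The mechanics are straightforward; a sketch with hypothetical pool and device names:

```shell
# Add a hot spare to an existing pool:
zpool add tank spare c1t6d0

# The spare sits idle (AVAIL) until a device faults;
# check its state along with the rest of the pool:
zpool status tank

# Let ZFS automatically swap in the spare when a device fails:
zpool set autoreplace=on tank
```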
Erik,
I don't think there was any confusion about the block nature of zfs send
vs. the file nature of star. I think what this discussion is coming down to
is
the best ways to utilize zfs send as a backup, since (as Darren Moffat has
noted) it supports all the ZFS objects and metadata.
I see 2
Exactly!
This is what I meant, at least when it comes to backing up ZFS datasets.
There
are tools available NOW, such as Star, which will backup ZFS datasets due to
the
POSIX nature of those datasets. As well, Amanda, Bacula, NetBackup,
Networker
and probably some others I missed. Re-inventing
To be sure, Ed, I'm not asking:
Why bother trying to backup with zfs send when there are fully supportable
and
working options available right NOW?
Rather, I am asking:
Why do we want to adapt zfs send to do something it was never intended
to do, and probably won't be adapted to do (well, if at
Ugh! I meant that to go to the list, so I'll probably re-send it for the
benefit
of everyone involved in the discussion. There were parts of that that I
wanted
others to read.
From a re-read of Richard's e-mail, maybe he meant that the number of I/Os
queued to a device can be tuned lower and
For those following along, this is the e-mail I meant to send to the list
but
instead sent directly to Tonmaus. My mistake, and I apologize for having to
re-send.
=== Start ===
My understanding, limited though it may be, is that a scrub touches ALL data
that
has been written, including the
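That understanding matches the basic scrub commands (pool name hypothetical):

```shell
# A scrub reads every allocated block in the pool and verifies
# it against its checksum, repairing from redundancy if needed:
zpool scrub tank

# Monitor progress and see any errors found/repaired:
zpool status -v tank

# Stop a running scrub if it's hurting foreground I/O:
zpool scrub -s tank
```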
Ian,
When you say you spool to tape for off-site archival, what software do you
use?
On Wed, Mar 17, 2010 at 18:53, Ian Collins i...@ianshome.com wrote:
SNIP
I have been using a two stage backup process with my main client,
send/receive to a backup pool and spool to tape for off site
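The first stage of that kind of setup typically looks like this (pool, dataset, and snapshot names hypothetical):

```shell
# Stage 1: incremental replication to a second pool. -R picks up
# descendant datasets, snapshots, and properties; -i sends only
# the delta since the previous snapshot.
zfs snapshot -r tank@today
zfs send -R -i tank@yesterday tank@today | zfs receive -d backup

# Stage 2 (spooling the backup pool to tape) is site-specific:
# NDMP, Star, Amanda, Bacula, etc., reading from the backup pool.
```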
In following this discussion, I get the feeling that you and Richard are
somewhat
talking past each other. He asked you about the hardware you are currently
running
on, whereas you seem to be interested in a model for the impact of scrubbing
on
I/O throughput that you can apply to some
The issue as presented by Tonmaus was that a scrub was negatively impacting
his RAIDZ2 CIFS performance, but he didn't see the same impact with RAIDZ.
I'm not going to say whether that is a problem one way or the other; it
may
be expected behavior under the circumstances. That's for ZFS
I thought pointing out some of this information might come in handy for some
of the
folks who are new to the (Open)Solaris world.
The following section discusses differences between SMI labels (aka VTOC)
and EFI
GPT labels. It may not be everything one needs to know in order to
successfully
I'm imagining that OpenSolaris isn't *too* different from Solaris 10 in this
regard.
I believe Richard Elling recommended cfgadm -v. I'd also suggest
iostat -E, with and without -n for good measure.
So that's iostat -E and iostat -En. As long as you know the physical
drive
specification for
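Putting those suggestions in one place (device name hypothetical):

```shell
# Per-device identity and error info: vendor, product, serial,
# size, and soft/hard/transport error counters:
iostat -E

# Same output, keyed by the logical c#t#d# device names:
iostat -En

# Attachment-point status for controllers and attached disks:
cfgadm -v

# Print the label/partition table of a specific disk:
prtvtoc /dev/rdsk/c0t0d0s2
```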
Ugh! If you received a direct response from me instead of via the list,
apologies for that.
Rob:
I'm just reporting the news. The RFE is out there. Just like SLOGs, I
happen to
think it a good idea, personally, but that's my personal opinion. If it
makes dedup
more usable, I don't see the
The DDT is stored within the pool, IIRC, but there is an RFE open to allow
you to
store it on a separate top level VDEV, like a SLOG.
The other thing I've noticed is all of the "destroyed a large dataset with
dedup enabled and it's taking forever to import/destroy/<insert function here>"
questions
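The DDT can be inspected directly, which helps explain those slow destroys (pool name hypothetical):

```shell
# Print dedup table statistics: entry count, on-disk and
# in-core size per entry:
zdb -D tank

# More detail, including a histogram of reference counts:
zdb -DD tank

# The recurring explanation on the list: each DDT entry costs
# a few hundred bytes of RAM, so a DDT that doesn't fit in the
# ARC forces disk I/O per freed block during a destroy.
```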
Well, it's an attack, right? Neither Skein nor Threefish has been
compromised.
In fact, this is what you want to see - researchers attacking an algorithm
which
goes a long way toward furthering or proving the security of said
algorithm. I
think I agree with Darren overall, but this still looks
I think the point, Chester, which everyone seems to be dancing around
or missing, is that your planning may need to go back to the drawing board
on this one. Absorb the resources out there for how to best configure
your pools and vdevs, *then* implement. That's the most efficient way to go