On Tue, Feb 26, 2013 at 7:42 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Wed, 27 Feb 2013, Ian Collins wrote:
I am finding that rsync with the right options (to directly
block-overwrite) plus zfs snapshots is providing me with pretty
amazing deduplication for backups without
On Tue, Jan 22, 2013 at 5:29 AM, Darren J Moffat darr...@opensolaris.org wrote:
Preallocated ZVOLs - for swap/dump.
Darren, good to hear about the cool stuff in S11.
Just to clarify, is this preallocated ZVOL different than the preallocated
dump which has been there for quite some time (and is
Arne, I took a look at far.c in
http://cr.illumos.org/~webrev/sensille/far-send/. Here are some
high-level comments:
Why did you choose to do this all in the kernel? As opposed to the
way zfs diff works, where the kernel generates the list of changed
items and then userland sorts out what
In general, you can force the unmount with the -f flag.
As to your specific question of changing the mountpoint to somewhere that
it can't currently be mounted, it should set the mountpoint property but
not remount it. E.g.:
# zfs set mountpoint=/ rpool/test
cannot mount '/': directory is not
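A minimal transcript of the behavior described above (dataset and path names are hypothetical, and the output is paraphrased rather than captured from a live system):

```
# The property update succeeds even when the new path can't be mounted over:
# zfs set mountpoint=/mnt/busy rpool/test
cannot mount '/mnt/busy': directory is not empty
property may be set but unable to remount filesystem
# zfs get -H -o value mountpoint rpool/test
/mnt/busy
# Once the directory is empty (or using an overlay mount), mount it by hand:
# zfs mount rpool/test
```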
On Thu, Oct 25, 2012 at 2:25 AM, Jim Klimov jimkli...@cos.ru wrote:
Hello all,
I was describing how raidzN works recently, and got myself wondering:
does zpool scrub verify all the parity sectors and the mirror halves?
Yes. The ZIO_FLAG_SCRUB instructs the raidz or mirror vdev to read
On Sat, Oct 20, 2012 at 1:23 AM, Arne Jansen sensi...@gmx.net wrote:
On 10/20/2012 01:21 AM, Matthew Ahrens wrote:
On Fri, Oct 19, 2012 at 1:46 PM, Arne Jansen sensi...@gmx.net wrote:
On 10/19/2012 09:58 PM, Matthew Ahrens wrote:
Please don't bother
On Wed, Oct 17, 2012 at 5:29 AM, Arne Jansen sensi...@gmx.net wrote:
We have finished a beta version of the feature. A webrev for it can
On Wed, Oct 17, 2012 at 5:29 AM, Arne Jansen sensi...@gmx.net wrote:
We have finished a beta version of the feature. A webrev for it
can be found here:
http://cr.illumos.org/~webrev/sensille/fits-send/
It adds a command 'zfs fits-send'. The resulting streams can
currently only be received
On Wed, Sep 26, 2012 at 10:28 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
When I create a 50G zvol, it gets volsize 50G, and it gets used and
refreservation 51.6G
I have some filesystems already in use,
On Fri, Sep 21, 2012 at 4:00 AM, Bogdan Ćulibrk b...@default.rs wrote:
Greetings,
I'm trying to achieve selective output of zfs list command for specific
user to show only delegated sets. Anyone knows how to achieve this?
I've checked zfs allow already but it only helps in restricting the
On Fri, Sep 14, 2012 at 11:07 PM, Bill Sommerfeld sommerf...@hamachi.org wrote:
On 09/14/12 22:39, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
On Sat, Sep 15, 2012 at 2:07 PM, Dave Pooser dave@alfordmedia.com wrote:
The problem: so far the send/recv appears to have copied 6.25TB of 5.34TB.
That... doesn't look right. (Comparing zfs list -t snapshot and looking at
the 5.34 ref for the snapshot vs zfs list on the new system and
On Thu, Aug 30, 2012 at 1:11 PM, Timothy Coalson tsc...@mst.edu wrote:
Is there a way to get the total amount of data referenced by a snapshot
that isn't referenced by a specified snapshot/filesystem? I think this is
what is really desired in order to locate snapshots with offending space
On Wed, May 2, 2012 at 3:28 PM, Fred Liu fred_...@issi.com wrote:
The size accounted for by the userused@ and groupused@ properties is the
referenced space, which is used as the basis for many other space
accounting values in ZFS (e.g. du / ls -s / stat(2), and the zfs
accounting
properties
2012/4/25 Richard Elling richard.ell...@gmail.com:
On Apr 25, 2012, at 8:14 AM, Eric Schrock wrote:
ZFS will always track per-user usage information even in the absence of
quotas. See the zfs 'userused@' properties and 'zfs userspace' command.
tip: zfs get -H -o value -p
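The truncated tip above is heading toward script-friendly output; a sketch of how those flags combine (pool, filesystem, and user names are made up):

```
# Per-user consumption in parseable form (-H no headers, -p exact numbers):
# zfs userspace -H -p -o name,used tank/home
ahrens   52428800
jklimov  1073741824
# A single raw value for one user:
# zfs get -H -o value -p userused@ahrens tank/home
52428800
```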
On Thu, Jan 12, 2012 at 5:00 PM, Jim Klimov jimkli...@cos.ru wrote:
While reading about zfs on-disk formats, I wondered once again
why is it not possible to create a snapshot on existing data,
not of the current TXG but of some older point-in-time?
It is not possible because the older data
On Thu, Jan 5, 2012 at 7:17 PM, Ivan Rodriguez ivan...@gmail.com wrote:
Dear list,
I'm about to upgrade a zpool from version 10 to 29. I suppose that
this upgrade will improve several performance issues that are present
on 10; however,
inside that pool we have several zfs filesystems all of
On Thu, Jan 5, 2012 at 6:53 AM, sol a...@yahoo.com wrote:
I would have liked to think that there was some good-will between the ex-
and current-members of the zfs team, in the sense that the people who
created zfs but then left Oracle still care about it enough to want the
Oracle version to
On Fri, Jan 13, 2012 at 4:49 PM, Matt Banks mattba...@gmail.com wrote:
I'm sorry to be asking such a basic question that would seem to be easily
found on Google, but after 30 minutes of googling and looking through
this list's archives, I haven't found a definitive answer.
Is the L2ARC
On Mon, Jan 16, 2012 at 11:34 AM, Jim Klimov jimkli...@cos.ru wrote:
2012-01-16 23:14, Matthew Ahrens wrote:
On Thu, Jan 12, 2012 at 5:00 PM, Jim Klimov jimkli...@cos.ru wrote:
While reading about zfs on-disk formats, I wondered once again
why
On Mon, Dec 12, 2011 at 11:04 PM, Erik Trimble tr...@netdemons.com wrote:
On 12/12/2011 12:23 PM, Richard Elling wrote:
On Dec 11, 2011, at 2:59 PM, Mertol Ozyoney wrote:
Not exactly. What is dedup'ed is the stream only, which is in fact not
very
efficient. Real dedup aware replication is
On Fri, Nov 4, 2011 at 6:49 PM, Ian Collins i...@ianshome.com wrote:
On 11/ 5/11 02:37 PM, Matthew Ahrens wrote:
On Wed, Oct 19, 2011 at 1:52 AM, Ian Collins i...@ianshome.com wrote:
I just tried sending from a oi151a system to a Solaris 10 backup
server
On Sat, Oct 29, 2011 at 10:57 AM, Jim Klimov jimkli...@cos.ru wrote:
In short, is it
possible to add restartability to ZFS SEND
In short, yes.
We are working on it here at Delphix, and plan to contribute our changes
upstream to Illumos.
You can read more about it in the slides I link to in
On Wed, Oct 19, 2011 at 1:52 AM, Ian Collins i...@ianshome.com wrote:
I just tried sending from a oi151a system to a Solaris 10 backup server
and the server barfed with
zfs_receive: stream is unsupported version 17
I can't find any documentation linking stream version to release, so does
On Thu, Aug 11, 2011 at 11:14 AM, Test Rat ttse...@gmail.com wrote:
After replicating a pool with zfs send/recv I've found out I cannot
perform some zfs operations on those datasets anymore. The datasets had permissions
set via `zfs allow'.
...
So, what are permissions if not properties?
Properties
I have implemented a new property for ZFS, refratio, which is the
compression ratio for referenced space (the compressratio is the ratio for
used space). We are using this here at Delphix to figure out how much space
a filesystem would use if it was not compressed (ignoring snapshots). I'd
like
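A back-of-the-envelope illustration of what such a ratio works out to, with made-up byte counts (this is plain arithmetic, not output from a real pool):

```shell
# refratio ~= uncompressed (logical) referenced bytes / on-disk referenced
# bytes, rounded to two decimals the way zfs prints ratios.
logical=963641344     # hypothetical: what the data would occupy uncompressed
physical=418381824    # hypothetical: what it actually occupies on disk
awk -v l="$logical" -v p="$physical" 'BEGIN { printf "%.2fx\n", l/p }'
# prints 2.30x
```

So a 2.30x refratio would say the filesystem needs roughly 2.3 times its current referenced space if stored uncompressed, ignoring snapshots.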
On Mon, Jun 6, 2011 at 7:08 PM, Haudy Kazemi kaze0...@umn.edu wrote:
On 6/6/2011 5:02 PM, Richard Elling wrote:
On Jun 6, 2011, at 2:54 PM, Yuri Pankov wrote:
On Mon, Jun 06, 2011 at 02:19:50PM -0700, Matthew Ahrens wrote:
I have implemented a new property for ZFS, refratio, which
On Sat, Jun 4, 2011 at 12:51 PM, Harry Putnam rea...@newsguy.com wrote:
But I also see a massive list of files with a letter `m' prefixed on
each line, which is supposed to mean modified. They cannot all really
be modified, so I'm thinking it's something to do with rsyncing files
from a windows
On Tue, May 31, 2011 at 6:52 AM, Tomas Ögren st...@acc.umu.se wrote:
On a different setup, we have about 750 datasets where we would like to
use a single recursive snapshot, but when doing that all file access
will be frozen for varying amounts of time (sometimes half an hour or
way more).
On Thu, May 12, 2011 at 08:52:04PM +1000, Daniel Carosone wrote:
Other than the initial create, and the most
recent scrub, the history only contains a sequence of auto-snapshot
creations and removals. None of the other commands I'd expect, like
the filesystem creations and recv, the
The community of developers working on ZFS continues to grow, as does
the diversity of companies betting big on ZFS. We wanted a forum for
these developers to coordinate their efforts and exchange ideas. The
ZFS working group was formed to coordinate these development efforts.
The working group
On Wed, May 25, 2011 at 12:55 PM, Deano de...@rattie.demon.co.uk wrote:
snip
Hi Matt,
That's looks really good, I've been meaning to implement a ZFS compressor
(using a two pass, LZ4 + Arithmetic Entropy), so nice to see a route with
which this can be done.
Cool! New compression
On Wed, May 25, 2011 at 3:08 PM, Peter Jeremy
peter.jer...@alcatel-lucent.com wrote:
On 2011-May-26 03:02:04 +0800, Matthew Ahrens mahr...@delphix.com wrote:
Looks good.
Thanks for taking the time to look at this. More comments inline below.
pool open (zpool import and implicit import
On Wed, May 25, 2011 at 2:23 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
I've finally returned to this dedup testing project, trying to get a handle
on why performance is so terrible. At the moment I'm re-running tests and
monitoring memory_throttle_count,
On Wed, May 25, 2011 at 8:01 PM, Matt Weatherford m...@u.washington.edu wrote:
pike# zpool get version internal
NAME      PROPERTY  VALUE  SOURCE
internal  version   28     default
pike# zpool get version external-J4400-12x1TB
NAME      PROPERTY  VALUE  SOURCE
On Thu, Jan 13, 2011 at 4:36 AM, fred f...@mautadine.com wrote:
Thanks for this explanation
So there is no real way to estimate the size of the increment?
Unfortunately not for now.
Anyway, for this particular filesystem, i'll stick with rsync and yes, the
difference was 50G!
Why? I
On Mon, Jan 10, 2011 at 2:40 PM, fred f...@mautadine.com wrote:
Hello,
I'm having a weird issue with my incremental setup.
Here is the filesystem as it shows up with zfs list:
NAME USED AVAIL REFER MOUNTPOINT
Data/FS1 771M 16.1T
On Thu, Dec 9, 2010 at 5:31 PM, Ian Collins i...@ianshome.com wrote:
On 12/10/10 12:31 PM, Moazam Raja wrote:
So, is it OK to send/recv while having the receive volume write enabled?
A write can fail if a filesystem is unmounted for update.
True, but ZFS recv will not normally unmount a
usedsnap is the amount of space consumed by all snapshots, i.e. the
amount of space that would be recovered if all snapshots were to be
deleted.
The space used by any one snapshot is the space that would be
recovered if that snapshot was deleted, i.e. the amount of space that
is unique to that
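The distinction can be modeled with plain shell over made-up block lists (a toy illustration of the accounting, not zfs itself):

```shell
# One line per referenced block; block IDs are invented for the example.
tmp=$(mktemp -d)
printf 'b1\nb2\nb3\n' > "$tmp/head"    # blocks the live filesystem references
printf 'b1\nb4\nb5\n' > "$tmp/snap1"   # b4 and b5 survive only in snapshots
printf 'b1\nb2\nb4\n' > "$tmp/snap2"

# usedsnap: blocks held by any snapshot but not by the live fs --
# what destroying *all* snapshots would recover.
sort -u "$tmp/snap1" "$tmp/snap2" > "$tmp/allsnap"
usedsnap=$(grep -cvxFf "$tmp/head" "$tmp/allsnap")

# used of snap1 alone: blocks unique to snap1. b4 is also held by
# snap2, so destroying snap1 by itself frees only b5.
sort -u "$tmp/snap2" "$tmp/head" > "$tmp/others"
snap1_used=$(grep -cvxFf "$tmp/others" "$tmp/snap1")

echo "destroying all snapshots frees $usedsnap blocks"   # 2 (b4, b5)
echo "destroying snap1 alone frees $snap1_used blocks"   # 1 (b5)
rm -r "$tmp"
```

Note how the per-snapshot used values (1 + 0 here) can sum to less than usedsnap, because space shared only among snapshots is charged to none of them individually.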
On Wed, Dec 1, 2010 at 10:30 AM, Don Jackson don.jack...@gmail.com wrote:
# zfs send -R naspool/open...@xfer-11292010 | zfs receive -Fv npool/openbsd
receiving full stream of naspool/open...@xfer-11292010 into
npool/open...@xfer-11292010
received 23.5GB stream in 883 seconds (27.3MB/sec)
I verified that this bug exists in OpenSolaris as well. The problem is that
we can't destroy the old filesystem a (which has been renamed to
rec2/recv-2176-1
in this case). We can't destroy it because it has a child, b. We need to
rename b to be under the new a. However, we are not renaming
That's correct.
This behavior is because the send|recv operates on the DMU objects,
whereas the recordsize property is interpreted by the ZPL. The ZPL
checks the recordsize property when a file grows. But the recv
doesn't grow any files, it just dumps data into the underlying
objects.
--matt
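A sketch of how one might observe this (pool and dataset names hypothetical; treat it as an outline rather than exact commands and output):

```
# Set a small recordsize on the receiving side, then receive:
# zfs set recordsize=8k tank/backups
# zfs send tank/src@snap | zfs recv tank/backups/src
# Files in the received dataset keep the block size they had on the
# sender; only files later grown through the ZPL pick up the 8k value.
# zdb -dddd tank/backups/src <object#>   # 'dblk' shows a file's block size
```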
Jordan Schwartz wrote:
ZFSfolk,
Pardon the slightly offtopic post, but I figured this would be a good
forum to get some feedback.
I am looking at implementing zfs group quotas on some X4540s and
X4140/J4400s, 64GB of RAM per server, running Solaris 10 Update 8
servers with IDR143158-06.
There
Tom Hall wrote:
Re the DDT, can someone outline its structure please? Some sort of
hash table? The blogs I have read so far don't specify.
It is stored in a ZAP object, which is an extensible hash table. See
zap.[ch], ddt_zap.c, ddt.h
--matt
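For the curious, zdb can dump statistics from that table (pool name hypothetical; the exact lines vary by build):

```
# Histogram and per-table summary of dedup entries:
# zdb -DD tank
DDT-sha256-zap-duplicate: ... entries, size ... on disk, ... in core
DDT-sha256-zap-unique: ... entries, size ... on disk, ... in core
...
```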
This is RFE 6425091 want 'zfs diff' to list files that have changed between
snapshots, which covers both file directory changes, and file
removal/creation/renaming. We actually have a prototype of zfs diff.
Hopefully someday we will finish it up...
--matt
Henu wrote:
Hello
Is there a
John Meyer wrote:
Looks like this part got cut off somehow:
the filesystem mount point is set to /usr/local/local. I just want to
do a simple backup/restore, can anyone tell me something obvious that I'm not
doing right?
Using OpenSolaris development build 130.
Sounds like bug 6916662,
Michael Schuster wrote:
Mike Gerdts wrote:
On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi mikko.la...@lmmz.net wrote:
Hello,
As a result of one badly designed application running loose for some
time,
we now seem to have over 60 million files in one directory. Good thing
about ZFS is that it
Gaëtan Lehmann wrote:
Hi,
On opensolaris, I use du with the -b option to get the uncompressed size
of a directory):
r...@opensolaris:~# du -sh /usr/local/
399M    /usr/local/
r...@opensolaris:~# du -sbh /usr/local/
915M    /usr/local/
r...@opensolaris:~# zfs list -o
Len Zaifman wrote:
We have just updated a major file server to solaris 10 update 9 so that we can
control user and group disk usage on a single filesystem.
We were using qfs and one nice thing about samquota was that it told you your
soft limit, your hard limit and your usage on disk space and
Brandon High wrote:
I'm playing around with snv_128 on one of my systems, and trying to
see what kind of benefits enabling dedup will give me.
The standard practice for reprocessing data that's already stored to
add compression and now dedup seems to be a send / receive pipe
similar to:
If you did not do zfs set dedup=fletcher4,verify fs (which is available
in build 128 and nightly bits since then), you can ignore this message.
We have changed the on-disk format of the pool when using
dedup=fletcher4,verify with the integration of:
6903705 dedup=fletcher4,verify doesn't
Andrew Gabriel wrote:
Kjetil Torgrim Homme wrote:
Daniel Carosone d...@geek.com.au writes:
Would there be a way to avoid taking snapshots if they're going to be
zero-sized?
I don't think it is easy to do, the txg counter is on a pool level,
AFAIK:
# zdb -u spool
Uberblock
functionality has been removed. We will investigate
whether it's possible to fix these issues and re-enable this functionality.
--matt
Matthew Ahrens wrote:
If you did not do zfs set dedup=fletcher4,verify fs (which is
available in build 128 and nightly bits since then), you can ignore this
message
Peter Wilk wrote:
tank/apps will be mounted as /apps -- need to be set with 10G
tank/apps/data1 will need to be mounted as /apps/data1, need to be set
with 20G alone.
The question is:
If refquota is being used to set the filesystem sizes on /apps and
/apps/data1. /apps/data1 will not be
Alastair Neil wrote:
However, the user or group quota is applied when a clone or a
snapshot is created from a file system that has a user or group quota.
Applied to a clone I understand what that means; applied to a
snapshot - not so clear. Does it mean enforced on the original dataset?
The user/group used can be out of date by a few seconds, same as the used
and referenced properties. You can run sync(1M) to wait for these values
to be updated. However, that doesn't seem to be the problem you are
encountering here.
Can you send me the output of:
zfs list zpool1/sd01_mail
Alastair Neil wrote:
On Tue, Oct 20, 2009 at 12:12 PM, Matthew Ahrens matthew.ahr...@sun.com wrote:
Alastair Neil wrote:
However, the user or group quota is applied when a clone or a
snapshot is created from a file system that has
Tomas Ögren wrote:
On a related note, there is a way to still have quota used even after
all files are removed, S10u8/SPARC:
In this case there are two directories that have not actually been removed.
They have been removed from the namespace, but they are still open, eg due to
some
Tomas Ögren wrote:
On 20 October, 2009 - Matthew Ahrens sent me these 0,7K bytes:
Tomas Ögren wrote:
On a related note, there is a way to still have quota used even after
all files are removed, S10u8/SPARC:
In this case there are two directories that have not actually been
removed. They have
Thanks for reporting this. I have fixed this bug (6822816) in build
127. Here is the evaluation from the bug report:
The problem is that the clone's dsobj does not appear in the origin's
ds_next_clones_obj.
The bug can occur under certain circumstances if there was a
botched
Brandon,
Yes, this is something that should be possible once we have bp rewrite (the
ability to move blocks around). One minor downside to hot space would be
that it couldn't be shared among multiple pools the way that hot spares can.
Also depending on the pool configuration, hot space may
Erik Trimble wrote:
From a global perspective, multi-disk parity (e.g. raidz2 or raidz3) is
the way to go instead of hot spares.
Hot spares are useful for adding protection to a number of vdevs, not a
single vdev.
Even when using raidz2 or 3, it is useful to have hot spares so that
Tristan Ball wrote:
OK, Thanks for that.
From reading the RFE, it sounds like having a faster machine on the
receive side will be enough to alleviate the problem in the short term?
That's correct.
--matt
___
zfs-discuss mailing list
Tristan Ball wrote:
Hi Everyone,
I have a couple of systems running opensolaris b118, one of which sends
hourly snapshots to the other. This has been working well, however as of
today, the receiving zfs process has started running extremely slowly,
and is running at 100% CPU on one core,
Brian Kolaci wrote:
So Sun would see increased hardware revenue stream if they would just
listen to the customer... Without [pool shrink], they look for alternative
hardware/software vendors.
Just to be clear, Sun and the ZFS team are listening to customers on this
issue. Pool shrink has
Jorgen Lundman wrote:
Oh I forgot the more important question.
Importing all the user quota settings; Currently as a long file of zfs
set commands, which is taking a really long time. For example,
yesterday's import is still running.
Are there bulk-import solutions? Like zfs set -f file.txt
Jorgen Lundman wrote:
I have been playing around with osol-nv-b114 version, and the ZFS user
and group quotas.
First of all, it is fantastic. Thank you all! (Sun, Ahrens and anyone
else involved).
Thanks for the feedback!
I was unable to get ZFS quota to work with rquota. (Ie, NFS mount
Edward Pilatowicz wrote:
hey all,
so recently i wrote some zones code to manage zones on zfs datasets.
the code i wrote did things like rename snapshots and promote
filesystems. while doing this work, i found a few zfs behaviours that,
if changed, could greatly simplify my work.
the primary
Joep Vesseur wrote:
I was wondering why zfs destroy -r is so excruciatingly slow compared to
parallel destroys.
This issue is bug # 6631178.
The problem is that zfs destroy -r filesystem destroys each filesystem
and snapshot individually, and each one must wait for a txg to sync (0.1 - 10
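The parallel approach being compared against can be sketched roughly as follows (dataset name invented; assumes GNU xargs -P, and that reverse-sorting names destroys snapshots and children before their parents):

```
# Serial: zfs destroy -r tank/build -- each destroy waits on its own txg sync.
# Parallel sketch: list everything under the subtree, deepest first,
# and run several destroys at once so their txg syncs batch together.
# zfs list -H -o name -t all -r tank/build | sort -r | xargs -n1 -P8 zfs destroy
```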
Paul Kraus wrote:
Sorry in advance if this has already been discussed, but I did not
find it in my archives of the list.
According to the ZFS documentation, a resilver operation
includes what is effectively a dirty region log (DRL) so that if the
resilver is interrupted, by a snapshot
Ed,
zfs destroy [-r] -p sounds great.
I'm not a big fan of the -t template. Do you have conflicting snapshot
names due to the way your (zones) software works, or are you concerned about
sysadmins creating these conflicting snapshots? If it's the former, would
it be possible to change the
Enrico Maria Crisostomo wrote:
# zfs send -R -I @20090329 mypool/m...@20090330 | zfs recv -F -d
anotherpool/anotherfs
I experienced core dumps and the error message was:
internal error: Arg list too long
Abort (core dumped)
This is 6801979, fixed in build 111.
--matt
Robert Milkowski wrote:
Hello Matthew,
Tuesday, March 31, 2009, 9:16:42 PM, you wrote:
MA Robert Milkowski wrote:
Hello Matthew,
Excellent news.
Wouldn't it be better if logical disk usage would be accounted and not
physical - I mean when compression is enabled should quota be
accounted
Mike Gerdts wrote:
On Tue, Mar 31, 2009 at 7:12 PM, Matthew Ahrens matthew.ahr...@sun.com wrote:
River Tarnell wrote:
Matthew Ahrens:
ZFS user quotas (like other zfs properties) will not be accessible over
NFS;
you must be on the machine running zfs to manipulate them.
does this mean
Microsystems
1. Introduction
1.1. Project/Component Working Name:
ZFS user/group quotas space accounting
1.2. Name of Document Author/Supplier:
Author: Matthew Ahrens
1.3 Date of This Document:
30 March, 2009
4. Technical Description
ZFS user/group space
Robert Milkowski wrote:
Hello Matthew,
Excellent news.
Wouldn't it be better if logical disk usage would be accounted and not
physical - I mean when compression is enabled should quota be
accounted based by a logical file size or physical as in du?
The compressed space *is* the amount of
Nicolas Williams wrote:
On Tue, Mar 31, 2009 at 02:37:02PM -0500, Mike Gerdts wrote:
The user or group is specified using one of the following forms:
posix name (eg. ahrens)
posix numeric id (eg. 126829)
sid name (eg. ahr...@sun)
sid numeric id (eg. S-1-12345-12423-125829)
How does this work
Nicolas Williams wrote:
We could also
disallow them from doing zfs get useru...@name pool/zoned/fs, just make
it an error to prevent them from seeing something other than what they
intended.
I don't see why the g-z admin should not get this data.
They can of course still get the data by
Tomas Ögren wrote:
On 31 March, 2009 - Matthew Ahrens sent me these 10K bytes:
FYI, I filed this PSARC case yesterday, and expect to integrate into
OpenSolaris in April. Your comments are welcome.
http://arc.opensolaris.org/caselog/PSARC/2009/204/
Quota reporting over NFS or for userland
River Tarnell wrote:
Matthew Ahrens:
ZFS user quotas (like other zfs properties) will not be accessible over NFS;
you must be on the machine running zfs to manipulate them.
does this mean that without an account on the NFS server, a user cannot see his
current disk use / quota?
That's
José Gomes wrote:
Can we assume that any snapshot listed by either 'zfs list -t snapshot'
or 'ls .zfs/snapshot' and previously created with 'zfs receive' is
complete and correct? Or is it possible for a 'zfs receive' command to
fail (corrupt/truncated stream, sigpipe, etc...) and a corrupt or
Jorgen Lundman wrote:
In the style of a discussion over a beverage, and talking about
user-quotas on ZFS, I recently pondered a design for implementing user
quotas on ZFS after having far too little sleep.
It is probably nothing new, but I would be curious what you experts
think of the
Bob Friesenhahn wrote:
On Thu, 12 Mar 2009, Jorgen Lundman wrote:
User-land will then have a daemon, whether or not it is one daemon per
file-system or really just one daemon does not matter. This process
will open '/dev/quota' and empty the transaction log entries
constantly. Take the
Gavin Maltby wrote:
Hi,
The manpage says
Specifically, used = usedbychildren + usedbydataset +
usedbyrefreservation + usedbysnapshots. These properties
are only available for datasets created on zpool
version 13 pools.
.. and I now realize that
Jorgen Lundman wrote:
Great! Will there be any particular limits on how many uids, or size of
uids in your implementation? UFS generally does not, but I did note that
if uid go over 1000 it flips out and changes the quotas file to
128GB in size.
All UIDs, as well as SIDs (from the SMB
Greg Mason wrote:
Just my $0.02, but would pool shrinking be the same as vdev evacuation?
Yes.
basically, what I'm thinking is:
zpool remove mypool list of devices/vdevs
Allow time for ZFS to vacate the vdev(s), and then light up the OK to
remove light on each evacuated disk.
That's the
David Magda wrote:
Given the threads that have appeared on this list lately, how about
codifying / standardizing the output of zfs send so that it can be
backed up to tape? :)
We will soon be changing the manpage to indicate that the zfs send stream
will be receivable on all future versions
Blake wrote:
zfs send is great for moving a filesystem with lots of tiny files,
since it just handles the blocks :)
I'd like to see:
pool-shrinking (and an option to shrink disk A when i want disk B to
become a mirror, but A is a few blocks bigger)
I'm working on it.
install to mirror
Andreas Koppenhoefer wrote:
Hello,
occasionally we got some solaris 10 server to panic in zfs code while doing
zfs send -i [EMAIL PROTECTED] [EMAIL PROTECTED] | ssh remote zfs receive
poolname.
The race condition(s) get triggered by a broken data transmission or killing
sending zfs or ssh
Are you sure that you don't have any refreservations?
--matt
Paul wrote:
I apologize for lack of info regarding to previous post.
# zpool list
NAME        SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
gwvm_zpool  3.35T  3.16T  190G   94%  ONLINE  -
rpool       135G   27.5G
Ian,
I couldn't find any bugs with a similar stack trace. Can you file a bug?
--matt
Ian Collins wrote:
The system was an x4540 running Solaris 10 Update 6 acting as a
production Samba server.
The only unusual activity was me sending and receiving incremental dumps
to and from another
Ben Rockwood wrote:
I've been struggling to fully understand why disk space seems to vanish.
I've dug through bits of code and reviewed all the mails on the subject that
I can find, but I still don't have a proper understanding of whats going on.
I did a test with a local zpool on
Indeed. This happens when the scrub started in the future according to the
timestamp. Then we get a negative amount of time passed, which gets printed
like this. We should check for this and at least print a more useful message.
--matt
Sanjeev Bagewadi wrote:
Mike,
Indeed an interesting
Robert Lawhead wrote:
Apologies up front for failing to find related posts...
Am I overlooking a way to get 'zfs send -i [EMAIL PROTECTED] [EMAIL
PROTECTED] | zfs receive -n -v ...' to show the contents of the stream? I'm
looking for the equivalent of ufsdump 1f - fs ... | ufsrestore tv -
Sumit Gupta wrote:
The /dev/[r]dsk nodes implement the O_EXCL flag. If a node is opened using
the O_EXCL, subsequent open(2) calls to that node fail. But I don't think the
same is true for /dev/zvol/[r]dsk nodes. Is that a bug (or maybe RFE)?
Yes, that seems like a fine RFE. Or a bug, if there's
John wrote:
This is one feature I've been hoping for... old threads and blogs talk about
this feature possibly showing up by the end of 2007; just curious what
the status of this feature is...
It's still a high priority on our road map, just pushed back a bit. Our
current goal is to
Richard Elling wrote:
Paul B. Henson wrote:
On Fri, 12 Oct 2007, Paul B. Henson wrote:
I've read a number of threads and blog posts discussing zfs send/receive
and its applicability is such an implementation, but I'm curious if
anyone has actually done something like that in practice, and if
Edward Pilatowicz wrote:
hey all,
so i'm trying to mirror the contents of one zpool to another
using zfs send / receive while maintaining all snapshots and clones.
You will enjoy the upcoming zfs send -R feature, which will make your
script unnecessary.
[EMAIL PROTECTED] zfs send -i 070221
Rahul Mehta wrote:
Has there been any solution to the problem discussed above in ZFS version 8??
We expect it to be fixed within a month. See:
http://opensolaris.org/os/community/arc/caselog/2007/555/
--matt