David Smith wrote:
What are your thoughts or recommendations on having a zpool made up of
raidz groups of different sizes? Are there going to be performance issues?
It should be fine. Under some circumstances the performance could be similar
to a pool with all raidz groups of the smallest
Orvar Korvar wrote:
I've heard it is hard to give a correct estimate of the used bytes in ZFS,
because of this and that. It gives you only an approximate number. I think
I've read that in the ZFS administration guide, somewhere in the zpool
status or zfs list command?
That is not correct; the
multiple threads. That
said, feel free to experiment.
I guess you should check with Matthew Ahrens as IIRC he's working on
'zfs send -r' and possibly some other improvements to zfs send. The
question is what code changes Matthew has done so far (it hasn't been
integrated AFAIK) and possibly work
Łukasz wrote:
You're right that we need to issue more i/os in
parallel -- see 6333409
traversal code should be able to issue multiple
reads in parallel
When do you think it will be available ?
Perhaps by the end of the calendar year, but perhaps longer. Maybe sooner
if you work on it
Łukasz K wrote:
Hello Matthew,
I have problems with pool fragmentation.
http://www.opensolaris.org/jive/thread.jspa?threadID=34810
Now I want to speed up zfs send, because our pool space maps are
huge - after sending space maps will be smaller ( from 1GB - 50MB ).
As I understand I
Robert Milkowski wrote:
Hello Matthew,
Monday, June 18, 2007, 7:28:35 PM, you wrote:
MA FYI, we're already working with engineers on some other ports to ensure
MA on-disk compatibility. Those changes are going smoothly. So please,
MA contact us if you want to make (or want us to make)
Kevin wrote:
After a scrub of a pool with 3 raidz2 vdevs (each with 5 disks in them) I see
the following status output. Notice that the raidz2 vdev has 2 checksum
errors, but only one disk inside the raidz2 vdev has a checksum error. How is
this possible? I thought that you would have to
Roger,
Could you send us (off-list is fine) the output of truss ls -l <file>? And
also, the output of zdb -vvv <containing-filesystem>? (which will compress
well with gzip if it's huge.)
thanks,
--matt
Roger Fujii wrote:
This is on a sol10u3 box. I could boot snv temporarily on this box if
Shannon Fiume wrote:
Hi,
I want to send pieces of a zfs filesystem to another system. Can zfs
send pieces of a snapshot? Say I only want to send over /[EMAIL PROTECTED]
and
not include /app/conf data while /app/conf is still a part of the
/[EMAIL PROTECTED] snapshot? I say app/conf as
Krzys wrote:
Hello everyone, I am slowly running out of space in my zpool.. so I wanted to
replace my zpool with a different zpool..
my current zpool is
zpool list
NAME     SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
mypool   278G   263G   14.7G   94%
Blake wrote:
Now I'm curious.
I was recursively removing snapshots that had been generated recursively
with the '-r' option. I'm running snv65 - is this a recent feature?
No; it was integrated in snv_43, and is in s10u3. See:
PSARC 2006/388 snapshot -r
6373978 want to take lots of
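For illustration, the recursive forms this refers to, as a minimal sketch (pool and snapshot names are hypothetical):

# Snapshot tank and every descendant filesystem atomically, then
# later remove that snapshot from the whole tree (names hypothetical).
zfs snapshot -r tank@nightly
zfs destroy -r tank@nightly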
Yaniv Aknin wrote:
When volumes approach 90% usage, and under medium/light load (zpool iostat
reports 50mb/s and 750iops reads), some creat64 system calls take over 50
seconds to complete (observed with 'truss -D touch'). When doing manual
tests, I've seen similar times on unlink() calls
Marko Milisavljevic wrote:
Hmm.. my b69 installation understands zfs allow, but man zfs has no info
at all.
Usually the manpages are updated in the same build as a new feature is added,
but the delegated admin manpage changes were extensive and slipped to build 70.
--matt
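Until the manpage catches up, a hedged example of the delegated-administration syntax (user, permissions, and dataset names are hypothetical):

# Let user 'blake' take and mount snapshots of his own filesystem,
# then display the delegated permissions (names hypothetical).
zfs allow blake snapshot,mount tank/home/blake
zfs allow tank/home/blake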
Brandorr wrote:
Is ZFS efficient at handling huge populations of tiny-to-small files -
for example, 20 million TIFF images in a collection, each between 5
and 500k in size?
Do you mean efficient in terms of space used? If so, then in general it is
quite efficient. Eg, files 128k space is
Ralf Ramge wrote:
I consider this a big design flaw of ZFS.
Are you saying that it's a design flaw of ZFS that we haven't yet implemented
remote replication? I would consider that a missing feature, not a design
flaw. There's nothing in the design of ZFS to prevent such a feature (and in
Brad Plecs wrote:
I hate to start rsyncing again, but may be forced to; policing the snapshot
space consumption is
getting painful, but the online snapshot feature is too valuable to discard
altogether.
or if there are other creative solutions, I'm all ears...
OK, you asked for
Igor Brezac wrote:
We are on Solaris 10 U3 with relatively recent recommended patches applied.
zfs destroy of a filesystem takes a very long time; 20GB usage and about 5
million objects takes about 10 minutes to destroy. zfs pool is a 2 drive
stripe, nothing too fancy. We do not have any
Stuart Anderson wrote:
Before I open a new case with Sun, I am wondering if anyone has seen this
kernel panic before? It happened on an X4500 running Sol10U3 while it was
receiving incremental snapshot updates.
Looks like it could be 6569719, which we expect to be fixed (in OpenSolaris)
Joerg Schilling wrote:
The best documented one is the inverted meta data tree that allows wofs to
write
only one new generation node for one modified file while ZFS needs to also
write new
nodes for all directories above the file including the root directory in the
fs.
I believe you are
Atul Vidwansa wrote:
ZFS Experts,
Is it possible to use the DMU as a general purpose transaction engine? More
specifically, in the following order:
1. Create transaction:
tx = dmu_tx_create(os);
error = dmu_tx_assign(tx, TXG_WAIT)
2. Decide what to modify (say, create a new object):
Joerg Schilling wrote:
Matthew Ahrens [EMAIL PROTECTED] wrote:
Joerg Schilling wrote:
The best documented one is the inverted meta data tree that allows wofs to
write
only one new generation node for one modified file while ZFS needs to also
write new
nodes for all directories above
msl wrote:
Hello all,
Is there a way to configure the zpool to legacy_mount, and have all
filesystems in that pool mounted automatically?
I will try to explain better:
- Imagine that I have a zfs pool with 1000 filesystems.
- I want to control the mount/unmount of that pool, so, I did
MC wrote:
With the arrival of ZFS, the format command is well on its way to
deprecation station. But how else do you list the devices that zpool can
create pools out of?
Would it be reasonable to enhance zpool to list the vdevs that are available
to it? Perhaps as part of the help
If you haven't resolved this bug with the storage folks, you can file a bug
at http://bugs.opensolaris.org/
--matt
eric kustarz wrote:
This actually looks like a sd bug... forwarding it to the storage
alias to see if anyone has seen this...
eric
On Sep 14, 2007, at 12:42 PM, J Duff
Tim Spriggs wrote:
I think they are listed in order with zfs list.
That's correct, they are listed in the order taken, from oldest to newest.
--matt
Solaris wrote:
Greetings.
I applied the Recommended Patch Cluster including 120012-14 to a U3
system today. I upgraded my zpool and it seems like we have some very
strange information coming from zpool list and zfs list...
[EMAIL PROTECTED]:/]# zfs list
NAME   USED   AVAIL
Max,
Glad you figured out where your problem was. Compression does complicate
things. Also, make sure you have the most recent (highest txg) uberblock.
Just for the record, using MDB to print out ZFS data structures is totally
sweet! We have actually been wanting to do that for about 5
Łukasz wrote:
I have a huge problem with space maps on thumper. Space maps take over 3GB
and write operations generate massive read operations.
Before every spa sync phase zfs reads space maps from disk.
I decided to turn on compression for pool ( only for pool, not filesystems )
and it
MC wrote:
Re: http://bugs.opensolaris.org/view_bug.do?bug_id=6602947
Specifically this part:
[i]Create zpool /testpool/. Create zfs file system /testpool/testfs.
Right click on /testpool/testfs (filesystem) in nautilus and rename to
testfs2.
Do zfs list. Note that only
Jesus Cea wrote:
Read performance [when using zfs set copies=2 vs a mirror] would double,
and this is very nice
I don't see how that could be the case. Either way, the reads should be able
to fan out over the two disks.
--matt
Jesus Cea wrote:
Would ZFS boot be able to boot from a copies boot dataset, when one of
the disks is failing? Counting that ditto blocks are spread between
both disks, of course.
You can not boot from a pool with multiple top-level vdevs (eg, the copies
pool you describe). We hope to
Jim Mauro wrote:
Hi Neel - Thanks for pushing this out. I've been tripping over this for
a while.
You can instrument zfs_read() and zfs_write() to reliably track filenames:
#!/usr/sbin/dtrace -s
#pragma D option quiet
zfs_read:entry,
zfs_write:entry
{
printf(%s of
Łukasz K wrote:
Now space maps, intent log, spa history are compressed.
All normal metadata (including space maps and spa history) is always
compressed. The intent log is never compressed.
Can you tell me where space map is compressed ?
we specify that it should be compressed in
Tim Thomas wrote:
Hi
this may be of interest:
http://blogs.sun.com/timthomas/entry/samba_performance_on_sun_fire
I appreciate that this is not a frightfully clever set of tests but I
needed some throughput numbers and the easiest way to share the
results is to blog.
It seems
Michael Kucharski wrote:
We have a x4500 setup as a single 4*(raidz2 9+2)+2 spare pool and have
the files system mounted over v5 krb5 NFS and accessed directly. The pool
is a 20TB pool and is using . There are three filesystems, backup, test
and home. Test has about 20 million files and
Rahul Mehta wrote:
Has there been any solution to the problem discussed above in ZFS version 8??
We expect it to be fixed within a month. See:
http://opensolaris.org/os/community/arc/caselog/2007/555/
--matt
Edward Pilatowicz wrote:
hey all,
so i'm trying to mirror the contents of one zpool to another
using zfs send / receive while maintaining all snapshots and clones.
You will enjoy the upcoming zfs send -R feature, which will make your
script unnecessary.
[EMAIL PROTECTED] zfs send -i 070221
Richard Elling wrote:
Paul B. Henson wrote:
On Fri, 12 Oct 2007, Paul B. Henson wrote:
I've read a number of threads and blog posts discussing zfs send/receive
and its applicability is such an implementation, but I'm curious if
anyone has actually done something like that in practice, and if
John wrote:
This is one feature I've been hoping for... old threads and blogs talk about
this feature possibly showing up by the end of 2007; just curious what
the status of this feature is...
It's still a high priority on our road map, just pushed back a bit. Our
current goal is to
Sumit Gupta wrote:
The /dev/[r]dsk nodes implement the O_EXCL flag. If a node is opened using
O_EXCL, subsequent open(2) calls to that node fail. But I don't think the
same is true for /dev/zvol/[r]dsk nodes. Is that a bug (or maybe RFE)?
Yes, that seems like a fine RFE. Or a bug, if there's
Robert Lawhead wrote:
Apologies up front for failing to find related posts...
Am I overlooking a way to get 'zfs send -i [EMAIL PROTECTED] [EMAIL
PROTECTED] | zfs receive -n -v ...' to show the contents of the stream? I'm
looking for the equivalent of ufsdump 1f - fs ... | ufsrestore tv -
Indeed. This happens when the scrub started in the future according to the
timestamp. Then we get a negative amount of time passed, which gets printed
like this. We should check for this and at least print a more useful message.
--matt
Sanjeev Bagewadi wrote:
Mike,
Indeed an interesting
Ben Rockwood wrote:
I've been struggling to fully understand why disk space seems to vanish.
I've dug through bits of code and reviewed all the mails on the subject that
I can find, but I still don't have a proper understanding of what's going on.
I did a test with a local zpool on
Andreas Koppenhoefer wrote:
Hello,
occasionally we got some solaris 10 server to panic in zfs code while doing
zfs send -i [EMAIL PROTECTED] [EMAIL PROTECTED] | ssh remote zfs receive
poolname.
The race condition(s) get triggered by a broken data transmission or killing
sending zfs or ssh
Are you sure that you don't have any refreservations?
--matt
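One hedged way to check for them across a whole pool (pool name hypothetical):

# List datasets that have a locally set refreservation (pool name hypothetical).
zfs get -r -s local refreservation tank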
Paul wrote:
I apologize for the lack of info regarding the previous post.
# zpool list
NAME         SIZE    USED    AVAIL   CAP   HEALTH   ALTROOT
gwvm_zpool   3.35T   3.16T   190G    94%   ONLINE   -
rpool        135G    27.5G
Ian,
I couldn't find any bugs with a similar stack trace. Can you file a bug?
--matt
Ian Collins wrote:
The system was an x4540 running Solaris 10 Update 6 acting as a
production Samba server.
The only unusual activity was me sending and receiving incremental dumps
to and from another
David Magda wrote:
Given the threads that have appeared on this list lately, how about
codifying / standardizing the output of zfs send so that it can be
backed up to tape? :)
We will soon be changing the manpage to indicate that the zfs send stream
will be receivable on all future versions
Blake wrote:
zfs send is great for moving a filesystem with lots of tiny files,
since it just handles the blocks :)
I'd like to see:
pool-shrinking (and an option to shrink disk A when i want disk B to
become a mirror, but A is a few blocks bigger)
I'm working on it.
install to mirror
Greg Mason wrote:
Just my $0.02, but would pool shrinking be the same as vdev evacuation?
Yes.
basically, what I'm thinking is:
zpool remove mypool <list of devices/vdevs>
Allow time for ZFS to vacate the vdev(s), and then light up the "OK to
remove" light on each evacuated disk.
That's the
Jorgen Lundman wrote:
In the style of a discussion over a beverage, and talking about
user-quotas on ZFS, I recently pondered a design for implementing user
quotas on ZFS after having far too little sleep.
It is probably nothing new, but I would be curious what you experts
think of the
Bob Friesenhahn wrote:
On Thu, 12 Mar 2009, Jorgen Lundman wrote:
User-land will then have a daemon, whether or not it is one daemon per
file-system or really just one daemon does not matter. This process
will open '/dev/quota' and empty the transaction log entries
constantly. Take the
Gavin Maltby wrote:
Hi,
The manpage says
Specifically, used = usedbychildren + usedbydataset +
usedbyrefreservation + usedbysnapshots. These properties
are only available for datasets created on zpool
version 13 pools.
.. and I now realize that
Jorgen Lundman wrote:
Great! Will there be any particular limits on how many uids, or size of
uids in your implementation? UFS generally does not, but I did note that
if uids go over 1000 it flips out and changes the quotas file to
128GB in size.
All UIDs, as well as SIDs (from the SMB
José Gomes wrote:
Can we assume that any snapshot listed by either 'zfs list -t snapshot'
or 'ls .zfs/snapshot' and previously created with 'zfs receive' is
complete and correct? Or is it possible for a 'zfs receive' command to
fail (corrupt/truncated stream, sigpipe, etc...) and a corrupt or
Microsystems
1. Introduction
1.1. Project/Component Working Name:
ZFS user/group quotas space accounting
1.2. Name of Document Author/Supplier:
Author: Matthew Ahrens
1.3 Date of This Document:
30 March, 2009
4. Technical Description
ZFS user/group space
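As background, a minimal sketch of the property syntax the case describes (pool, filesystem, and user names are hypothetical):

# Set a per-user quota, query one user's usage, then list usage for
# all users of the filesystem (names hypothetical).
zfs set userquota@ahrens=100G tank/home
zfs get userused@ahrens tank/home
zfs userspace tank/home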
Robert Milkowski wrote:
Hello Matthew,
Excellent news.
Wouldn't it be better if logical disk usage would be accounted and not
physical - I mean when compression is enabled should quota be
accounted based by a logical file size or physical as in du?
The compressed space *is* the amount of
Nicolas Williams wrote:
On Tue, Mar 31, 2009 at 02:37:02PM -0500, Mike Gerdts wrote:
The user or group is specified using one of the following forms:
posix name (eg. ahrens)
posix numeric id (eg. 126829)
sid name (eg. ahr...@sun)
sid numeric id (eg. S-1-12345-12423-125829)
How does this work
Nicolas Williams wrote:
We could also
disallow them from doing zfs get useru...@name pool/zoned/fs, just make
it an error to prevent them from seeing something other than what they
intended.
I don't see why the g-z admin should not get this data.
They can of course still get the data by
Tomas Ögren wrote:
On 31 March, 2009 - Matthew Ahrens sent me these 10K bytes:
FYI, I filed this PSARC case yesterday, and expect to integrate into
OpenSolaris in April. Your comments are welcome.
http://arc.opensolaris.org/caselog/PSARC/2009/204/
Quota reporting over NFS or for userland
River Tarnell wrote:
Matthew Ahrens:
ZFS user quotas (like other zfs properties) will not be accessible over NFS;
you must be on the machine running zfs to manipulate them.
does this mean that without an account on the NFS server, a user cannot see his
current disk use / quota?
That's
Robert Milkowski wrote:
Hello Matthew,
Tuesday, March 31, 2009, 9:16:42 PM, you wrote:
MA Robert Milkowski wrote:
Hello Matthew,
Excellent news.
Wouldn't it be better if logical disk usage would be accounted and not
physical - I mean when compression is enabled should quota be
accounted
Mike Gerdts wrote:
On Tue, Mar 31, 2009 at 7:12 PM, Matthew Ahrens matthew.ahr...@sun.com wrote:
River Tarnell wrote:
Matthew Ahrens:
ZFS user quotas (like other zfs properties) will not be accessible over
NFS;
you must be on the machine running zfs to manipulate them.
does this mean
Enrico Maria Crisostomo wrote:
# zfs send -R -I @20090329 mypool/m...@20090330 | zfs recv -F -d
anotherpool/anotherfs
I experienced core dumps and the error message was:
internal error: Arg list too long
Abort (core dumped)
This is 6801979, fixed in build 111.
--matt
Ed,
zfs destroy [-r] -p sounds great.
I'm not a big fan of the -t template. Do you have conflicting snapshot
names due to the way your (zones) software works, or are you concerned about
sysadmins creating these conflicting snapshots? If it's the former, would
it be possible to change the
Paul Kraus wrote:
Sorry in advance if this has already been discussed, but I did not
find it in my archives of the list.
According to the ZFS documentation, a resilver operation
includes what is effectively a dirty region log (DRL) so that if the
resilver is interrupted, by a snapshot
Edward Pilatowicz wrote:
hey all,
so recently i wrote some zones code to manage zones on zfs datasets.
the code i wrote did things like rename snapshots and promote
filesystems. while doing this work, i found a few zfs behaviours that,
if changed, could greatly simplify my work.
the primary
Joep Vesseur wrote:
I was wondering why zfs destroy -r is so excruciatingly slow compared to
parallel destroys.
This issue is bug # 6631178.
The problem is that zfs destroy -r filesystem destroys each filesystem
and snapshot individually, and each one must wait for a txg to sync (0.1 - 10
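For context, a hedged sketch of the kind of parallel destroy being compared against, assuming each snapshot can be destroyed independently (names hypothetical; -P requires an xargs with parallel-job support, e.g. GNU xargs):

# Destroy every snapshot under tank/fs with several zfs processes at
# once, so the per-destroy txg-sync waits overlap (names hypothetical).
zfs list -H -r -t snapshot -o name tank/fs | xargs -n 1 -P 8 zfs destroy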
Jorgen Lundman wrote:
I have been playing around with osol-nv-b114 version, and the ZFS user
and group quotas.
First of all, it is fantastic. Thank you all! (Sun, Ahrens and anyone
else involved).
Thanks for the feedback!
I was unable to get ZFS quota to work with rquota. (Ie, NFS mount
Jorgen Lundman wrote:
Oh I forgot the more important question.
Importing all the user quota settings; Currently as a long file of zfs
set commands, which is taking a really long time. For example,
yesterday's import is still running.
Are there bulk-import solutions? Like zfs set -f file.txt
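One possible stopgap, sketched here under the assumption that the file contains one complete zfs set command per line (file name hypothetical; -P requires an xargs with parallel-job support, e.g. GNU xargs):

# Run each line of quota-commands.txt (one 'zfs set userquota@...'
# command per line) with up to 8 concurrent processes.
xargs -P 8 -I CMD sh -c 'CMD' < quota-commands.txt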
Brian Kolaci wrote:
So Sun would see increased hardware revenue stream if they would just
listen to the customer... Without [pool shrink], they look for alternative
hardware/software vendors.
Just to be clear, Sun and the ZFS team are listening to customers on this
issue. Pool shrink has
Tristan Ball wrote:
Hi Everyone,
I have a couple of systems running opensolaris b118, one of which sends
hourly snapshots to the other. This has been working well, however as of
today, the receiving zfs process has started running extremely slowly,
and is running at 100% CPU on one core,
Tristan Ball wrote:
OK, Thanks for that.
From reading the RFE, it sounds like having a faster machine on the
receive side will be enough to alleviate the problem in the short term?
That's correct.
--matt
Brandon,
Yes, this is something that should be possible once we have bp rewrite (the
ability to move blocks around). One minor downside to hot space would be
that it couldn't be shared among multiple pools the way that hot spares can.
Also depending on the pool configuration, hot space may
Erik Trimble wrote:
From a global perspective, multi-disk parity (e.g. raidz2 or raidz3) is
the way to go instead of hot spares.
Hot spares are useful for adding protection to a number of vdevs, not a
single vdev.
Even when using raidz2 or 3, it is useful to have hot spares so that
Thanks for reporting this. I have fixed this bug (6822816) in build
127. Here is the evaluation from the bug report:
The problem is that the clone's dsobj does not appear in the origin's
ds_next_clones_obj.
The bug can occur under certain circumstances if there was a
botched
Peter Wilk wrote:
tank/apps will be mounted as /apps -- need to be set with 10G
tank/apps/data1 will need to be mounted as /apps/data1, need to be set
with 20G alone.
The question is:
If refquota is being used to set the filesystem sizes on /apps and
/apps/data1. /apps/data1 will not be
Alastair Neil wrote:
However, the user or group quota is applied when a clone or a
snapshot is created from a file system that has a user or group quota.
applied to a clone I understand what that means; applied to a
snapshot - not so clear. Does it mean enforced on the original dataset?
The user/group used can be out of date by a few seconds, same as the used
and referenced properties. You can run sync(1M) to wait for these values
to be updated. However, that doesn't seem to be the problem you are
encountering here.
Can you send me the output of:
zfs list zpool1/sd01_mail
Alastair Neil wrote:
On Tue, Oct 20, 2009 at 12:12 PM, Matthew Ahrens matthew.ahr...@sun.com
mailto:matthew.ahr...@sun.com wrote:
Alastair Neil wrote:
However, the user or group quota is applied when a clone or a
snapshot is created from a file system that has
Tomas Ögren wrote:
On a related note, there is a way to still have quota used even after
all files are removed, S10u8/SPARC:
In this case there are two directories that have not actually been removed.
They have been removed from the namespace, but they are still open, eg due to
some
Tomas Ögren wrote:
On 20 October, 2009 - Matthew Ahrens sent me these 0,7K bytes:
Tomas Ögren wrote:
On a related note, there is a way to still have quota used even after
all files are removed, S10u8/SPARC:
In this case there are two directories that have not actually been
removed. They have
If you did not do zfs set dedup=fletcher4,verify fs (which is available
in build 128 and nightly bits since then), you can ignore this message.
We have changed the on-disk format of the pool when using
dedup=fletcher4,verify with the integration of:
6903705 dedup=fletcher4,verify doesn't
Andrew Gabriel wrote:
Kjetil Torgrim Homme wrote:
Daniel Carosone d...@geek.com.au writes:
Would there be a way to avoid taking snapshots if they're going to be
zero-sized?
I don't think it is easy to do, the txg counter is on a pool level,
AFAIK:
# zdb -u spool
Uberblock
functionality has been removed. We will investigate
whether it's possible to fix these issues and re-enable this functionality.
--matt
Matthew Ahrens wrote:
If you did not do zfs set dedup=fletcher4,verify fs (which is
available in build 128 and nightly bits since then), you can ignore this
message
Brandon High wrote:
I'm playing around with snv_128 on one of my systems, and trying to
see what kinda of benefits enabling dedup will give me.
The standard practice for reprocessing data that's already stored to
add compression and now dedup seems to be a send / receive pipe
similar to:
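The command itself was cut off in the archive; a hedged reconstruction of the usual pattern (dataset and snapshot names hypothetical):

# Rewrite existing data through send|receive so the copy is stored with
# the dataset's current compression and dedup settings (names hypothetical).
zfs snapshot tank/data@rewrite
zfs send tank/data@rewrite | zfs receive tank/data.new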
Len Zaifman wrote:
We have just update a major file server to solaris 10 update 9 so that we can
control user and group disk usage on a single filesystem.
We were using qfs and one nice thing about samquota was that it told you your
soft limit, your hard limit and your usage on disk space and
Gaëtan Lehmann wrote:
Hi,
On OpenSolaris, I use du with the -b option to get the uncompressed size
of a directory:
r...@opensolaris:~# du -sh /usr/local/
399M    /usr/local/
r...@opensolaris:~# du -sbh /usr/local/
915M    /usr/local/
r...@opensolaris:~# zfs list -o
John Meyer wrote:
Looks like this part got cut off somehow:
the filesystem mount point is set to /usr/local/local. I just want to
do a simple backup/restore, can anyone tell me something obvious that I'm not
doing right?
Using OpenSolaris development build 130.
Sounds like bug 6916662,
Michael Schuster wrote:
Mike Gerdts wrote:
On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi mikko.la...@lmmz.net wrote:
Hello,
As a result of one badly designed application running loose for some
time,
we now seem to have over 60 million files in one directory. Good thing
about ZFS is that it
This is RFE 6425091 want 'zfs diff' to list files that have changed between
snapshots, which covers both file directory changes, and file
removal/creation/renaming. We actually have a prototype of zfs diff.
Hopefully someday we will finish it up...
--matt
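For reference, a hedged sketch of how such a diff is invoked in releases where the feature did eventually ship (dataset and snapshot names hypothetical):

# List files created, removed, renamed, or modified between two snapshots.
zfs diff tank/home@monday tank/home@tuesday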
Henu wrote:
Hello
Is there a
Tom Hall wrote:
Re the DDT, can someone outline its structure please? Some sort of
hash table? The blogs I have read so far don't specify.
It is stored in a ZAP object, which is an extensible hash table. See
zap.[ch], ddt_zap.c, ddt.h
--matt
Jordan Schwartz wrote:
ZFSfolk,
Pardon the slightly offtopic post, but I figured this would be a good
forum to get some feedback.
I am looking at implementing zfs group quotas on some X4540s and
X4140/J4400s, 64GB of RAM per server, running Solaris 10 Update 8
servers with IDR143158-06.
There
That's correct.
This behavior is because the send|recv operates on the DMU objects,
whereas the recordsize property is interpreted by the ZPL. The ZPL
checks the recordsize property when a file grows. But the recv
doesn't grow any files, it just dumps data into the underlying
objects.
--matt
I verified that this bug exists in OpenSolaris as well. The problem is that
we can't destroy the old filesystem a (which has been renamed to
rec2/recv-2176-1 in this case). We can't destroy it because it has a
child, b. We need to rename b to be under the new a. However, we are
not renaming
On Wed, Dec 1, 2010 at 10:30 AM, Don Jackson don.jack...@gmail.com wrote:
# zfs send -R naspool/open...@xfer-11292010 | zfs receive -Fv npool/openbsd
receiving full stream of naspool/open...@xfer-11292010 into
npool/open...@xfer-11292010
received 23.5GB stream in 883 seconds (27.3MB/sec)
usedsnap is the amount of space consumed by all snapshots. Ie, the
amount of space that would be recovered if all snapshots were to be
deleted.
The space used by any one snapshot is the space that would be
recovered if that snapshot was deleted. Ie, the amount of space that
is unique to that
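A quick way to see this breakdown per dataset, as a hedged sketch (pool name hypothetical):

# The USEDSNAP, USEDDS, USEDREFRESERV and USEDCHILD columns break down USED.
zfs list -r -o space tank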
On Thu, Dec 9, 2010 at 5:31 PM, Ian Collins i...@ianshome.com wrote:
On 12/10/10 12:31 PM, Moazam Raja wrote:
So, is it OK to send/recv while having the receive volume write enabled?
A write can fail if a filesystem is unmounted for update.
True, but ZFS recv will not normally unmount a
On Mon, Jan 10, 2011 at 2:40 PM, fred f...@mautadine.com wrote:
Hello,
I'm having a weird issue with my incremental setup.
Here is the filesystem as it shows up with zfs list:
NAME USED AVAIL REFER MOUNTPOINT
Data/FS1 771M 16.1T
On Thu, Jan 13, 2011 at 4:36 AM, fred f...@mautadine.com wrote:
Thanks for this explanation
So there is no real way to estimate the size of the increment?
Unfortunately not for now.
Anyway, for this particular filesystem, i'll stick with rsync and yes, the
difference was 50G!
Why? I