Re: [zfs-discuss] General recommendations on raidz groups of different sizes

2007-07-19 Thread Matthew Ahrens
David Smith wrote: What are your thoughts or recommendations on having a zpool made up of raidz groups of different sizes? Are there going to be performance issues? It should be fine. Under some circumstances the performance could be similar to a pool with all raidz groups of the smallest

Re: [zfs-discuss] Yet another zfs vs. vxfs comparison...

2007-07-21 Thread Matthew Ahrens
Orvar Korvar wrote: I've heard it is hard to give a correct estimate of the used bytes in ZFS, because of this and that. It gives you only an approximate number. I think I've read that in the ZFS administration guide, somewhere under the zpool status or zfs list command? That is not correct; the

Re: [zfs-discuss] ZFS send needs optimalization

2007-07-23 Thread Matthew Ahrens
multiple threads. That said, feel free to experiment. I guess you should check with Matthew Ahrens as IIRC he's working on 'zfs send -r' and possibly some other improvements to zfs send. The question is what code changes Matthew has done so far (it hasn't been integrated AFAIK) and possibly work

Re: [zfs-discuss] ZFS send needs optimalization

2007-07-24 Thread Matthew Ahrens
Łukasz wrote: You're right that we need to issue more i/os in parallel -- see 6333409 traversal code should be able to issue multiple reads in parallel When do you think it will be available ? Perhaps by the end of the calendar year, but perhaps longer. Maybe sooner if you work on it

Re: [zfs-discuss] ZFS send needs optimalization

2007-07-24 Thread Matthew Ahrens
Łukasz K wrote: Hello Matthew, I have problems with pool fragmentation. http://www.opensolaris.org/jive/thread.jspa?threadID=34810 Now I want to speed up zfs send, because our pool space maps are huge - after sending space maps will be smaller ( from 1GB - 50MB ). As I understand I

Re: [zfs-discuss] ZFS forks (Was: LZO compression?)

2007-07-26 Thread Matthew Ahrens
Robert Milkowski wrote: Hello Matthew, Monday, June 18, 2007, 7:28:35 PM, you wrote: MA FYI, we're already working with engineers on some other ports to ensure MA on-disk compatability. Those changes are going smoothly. So please, MA contact us if you want to make (or want us to make)

Re: [zfs-discuss] Mysterious corruption with raidz2 vdev (1 checksum err on disk, 2 on vdev?)

2007-07-27 Thread Matthew Ahrens
Kevin wrote: After a scrub of a pool with 3 raidz2 vdevs (each with 5 disks in them) I see the following status output. Notice that the raidz2 vdev has 2 checksum errors, but only one disk inside the raidz2 vdev has a checksum error. How is this possible? I thought that you would have to

Re: [zfs-discuss] Unremovable file in ZFS filesystem.

2007-08-09 Thread Matthew Ahrens
Roger, Could you send us (off-list is fine) the output of truss ls -l file? And also, the output of zdb -vvv containing-filesystem? (which will compress well with gzip if it's huge.) thanks, --matt Roger Fujii wrote: This is on a sol10u3 box. I could boot snv temporarily on this box if
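(For anyone hitting something similar, the requested diagnostics look roughly like this; the file and dataset names below are placeholders, not from Roger's system:)

    # capture the system calls made while listing the stuck file
    truss -o /tmp/truss.out ls -l /tank/fs/unremovable-file
    # dump verbose on-disk metadata for the filesystem that contains it
    zdb -vvv tank/fs > /tmp/zdb.out
    gzip /tmp/zdb.out    # the output compresses well if it is large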

Re: [zfs-discuss] zfs send/recv partial filesystem?

2007-08-10 Thread Matthew Ahrens
Shannon Fiume wrote: Hi, I want to send pieces of a zfs filesystem to another system. Can zfs send pieces of a snapshot? Say I only want to send over /[EMAIL PROTECTED] and not include /app/conf data while /app/conf is still a part of the /[EMAIL PROTECTED] snapshot? I say app/conf as

Re: [zfs-discuss] zpool upgrade to more storage

2007-08-13 Thread Matthew Ahrens
Krzys wrote: Hello everyone, I am slowly running out of space in my zpool.. so I wanted to replace my zpool with a different zpool.. my current zpool is zpool list NAME SIZE USED AVAIL CAP HEALTH ALTROOT mypool 278G 263G 14.7G 94%

Re: [zfs-discuss] remove snapshots

2007-08-17 Thread Matthew Ahrens
Blake wrote: Now I'm curious. I was recursively removing snapshots that had been generated recursively with the '-r' option. I'm running snv65 - is this a recent feature? No; it was integrated in snv_43, and is in s10u3. See: PSARC 2006/388 snapshot -r 6373978 want to take lots of
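A minimal sketch of the recursive pair being discussed (pool and snapshot names are made up):

    # snapshot a filesystem and every descendant atomically (snv_43+, s10u3+)
    zfs snapshot -r tank/home@nightly-2007-08-17
    # later, remove that snapshot across the whole subtree in one command
    zfs destroy -r tank/home@nightly-2007-08-17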

Re: [zfs-discuss] Extremely long creat64 latencies on higly utilized zpools

2007-08-17 Thread Matthew Ahrens
Yaniv Aknin wrote: When volumes approach 90% usage, and under medium/light load (zpool iostat reports 50mb/s and 750iops reads), some creat64 system calls take over 50 seconds to complete (observed with 'truss -D touch'). When doing manual tests, I've seen similar times on unlink() calls

Re: [zfs-discuss] Privileges

2007-08-18 Thread Matthew Ahrens
Marko Milisavljevic wrote: Hmm.. my b69 installation understands zfs allow, but man zfs has no info at all. Usually the manpages are updated in the same build as a new feature is added, but the delegated admin manpage changes were extensive and slipped to build 70. --matt
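For reference, delegated administration is exercised through zfs allow/unallow; a hedged example with invented user and dataset names:

    # let user 'marko' create, mount and snapshot datasets under tank/home/marko
    zfs allow marko create,mount,snapshot tank/home/marko
    # show what has been delegated on that dataset
    zfs allow tank/home/marko
    # take the permissions away again
    zfs unallow marko create,mount,snapshot tank/home/marko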

Re: [zfs-discuss] Is ZFS efficient for large collections of small files?

2007-08-20 Thread Matthew Ahrens
Brandorr wrote: Is ZFS efficient at handling huge populations of tiny-to-small files - for example, 20 million TIFF images in a collection, each between 5 and 500k in size? Do you mean efficient in terms of space used? If so, then in general it is quite efficient. Eg, files 128k space is

Re: [zfs-discuss] Mirrored zpool across network

2007-08-22 Thread Matthew Ahrens
Ralf Ramge wrote: I consider this a big design flaw of ZFS. Are you saying that it's a design flaw of ZFS that we haven't yet implemented remote replication? I would consider that a missing feature, not a design flaw. There's nothing in the design of ZFS to prevent such a feature (and in

Re: [zfs-discuss] ZFS quota

2007-08-22 Thread Matthew Ahrens
Brad Plecs wrote: I hate to start rsyncing again, but may be forced to; policing the snapshot space consumption is getting painful, but the online snapshot feature is too valuable to discard altogether. or if there are other creative solutions, I'm all ears... OK, you asked for

Re: [zfs-discuss] zfs destroy takes long time

2007-08-23 Thread Matthew Ahrens
Igor Brezac wrote: We are on Solaris 10 U3 with relatively recent recommended patches applied. zfs destroy of a filesystem takes a very long time; 20GB usage and about 5 million objects takes about 10 minutes to destroy. zfs pool is a 2 drive stripe, nothing too fancy. We do not have any

Re: [zfs-discuss] Kernel panic receiving incremental snapshots

2007-08-25 Thread Matthew Ahrens
Stuart Anderson wrote: Before I open a new case with Sun, I am wondering if anyone has seen this kernel panic before? It happened on an X4500 running Sol10U3 while it was receiving incremental snapshot updates. Looks like it could be 6569719, which we expect to be fixed (in OpenSolaris)

[zfs-discuss] cascading metadata modifications

2007-09-05 Thread Matthew Ahrens
Joerg Schilling wrote: The best documented one is the inverted meta data tree that allows wofs to write only one new generation node for one modified file while ZFS needs to also write new nodes for all directories above the file including the root directory in the fs. I believe you are

Re: [zfs-discuss] [zfs-code] DMU as general purpose transaction engine?

2007-09-05 Thread Matthew Ahrens
Atul Vidwansa wrote: ZFS Experts, Is it possible to use DMU as general purpose transaction engine? More specifically, in following order: 1. Create transaction: tx = dmu_tx_create(os); error = dmu_tx_assign(tx, TXG_WAIT) 2. Decide what to modify(say create new object):

Re: [zfs-discuss] cascading metadata modifications

2007-09-06 Thread Matthew Ahrens
Joerg Schilling wrote: Matthew Ahrens [EMAIL PROTECTED] wrote: Joerg Schilling wrote: The best documented one is the inverted meta data tree that allows wofs to write only one new generation node for one modified file while ZFS needs to also write new nodes for all directories above

Re: [zfs-discuss] zfs mount points (all-or-nothing)

2007-09-21 Thread Matthew Ahrens
msl wrote: Hello all, Is there a way to configure the zpool to legacy_mount and have all filesystems in that pool mounted automatically? I will try to explain better: - Imagine that I have a zfs pool with 1000 filesystems. - I want to control the mount/unmount of that pool, so I did
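There is no single switch that legacy-mounts a whole pool; a sketch of the two approaches, assuming a pool named tank with hypothetical dataset names:

    # per-filesystem legacy management: ZFS stops mounting it for you
    zfs set mountpoint=legacy tank/fs0001
    mount -F zfs tank/fs0001 /export/fs0001   # or list it in /etc/vfstab
    # the usual alternative: let ZFS mount every filesystem it knows about
    zfs mount -a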

Re: [zfs-discuss] Would a device list output be a reasonable feature for zpool(1)?

2007-10-09 Thread Matthew Ahrens
MC wrote: With the arrival of ZFS, the format command is well on its way to deprecation station. But how else do you list the devices that zpool can create pools out of? Would it be reasonable to enhance zpool to list the vdevs that are available to it? Perhaps as part of the help

Re: [zfs-discuss] Possible ZFS Bug - Causes OpenSolaris Crash

2007-10-09 Thread Matthew Ahrens
If you haven't resolved this bug with the storage folks, you can file a bug at http://bugs.opensolaris.org/ --matt eric kustarz wrote: This actually looks like a sd bug... forwarding it to the storage alias to see if anyone has seen this... eric On Sep 14, 2007, at 12:42 PM, J Duff

Re: [zfs-discuss] zfs snapshot timestamp info

2007-10-09 Thread Matthew Ahrens
Tim Spriggs wrote: I think they are listed in order with zfs list. That's correct, they are listed in the order taken, from oldest to newest. --matt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org
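If you want the timestamps rather than relying on the list order, the creation property can be printed and sorted on (dataset name is illustrative):

    # snapshots of tank/home, oldest first, with their creation times
    zfs list -t snapshot -o name,creation -s creation -r tank/home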

Re: [zfs-discuss] zfs / zpool list odd results in U4

2007-10-09 Thread Matthew Ahrens
Solaris wrote: Greetings. I applied the Recommended Patch Cluster including 120012-14 to a U3 system today. I upgraded my zpool and it seems like we have some very strange information coming from zpool list and zfs list... [EMAIL PROTECTED]:/]# zfs list NAME USED AVAIL

Re: [zfs-discuss] question about uberblock blkptr

2007-10-09 Thread Matthew Ahrens
Max, Glad you figured out where your problem was. Compression does complicate things. Also, make sure you have the most recent (highest txg) uberblock. Just for the record, using MDB to print out ZFS data structures is totally sweet! We have actually been wanting to do that for about 5

Re: [zfs-discuss] ZFS Space Map optimalization

2007-10-09 Thread Matthew Ahrens
Łukasz wrote: I have a huge problem with space maps on thumper. Space maps take over 3GB and write operations generate massive read operations. Before every spa sync phase zfs reads space maps from disk. I decided to turn on compression for the pool ( only for the pool, not filesystems ) and it

Re: [zfs-discuss] Does bug 6602947 concern ZFS more than Gnome?

2007-10-09 Thread Matthew Ahrens
MC wrote: Re: http://bugs.opensolaris.org/view_bug.do?bug_id=6602947 Specifically this part: [i]Create zpool /testpool/. Create zfs file system /testpool/testfs. Right click on /testpool/testfs (filesystem) in nautilus and rename to testfs2. Do zfs list. Note that only

Re: [zfs-discuss] future ZFS Boot and ZFS copies

2007-10-09 Thread Matthew Ahrens
Jesus Cea wrote: Read performance [when using zfs set copies=2 vs a mirror] would double, and this is very nice I don't see how that could be the case. Either way, the reads should be able to fan out over the two disks. --matt ___ zfs-discuss

Re: [zfs-discuss] future ZFS Boot and ZFS copies

2007-10-09 Thread Matthew Ahrens
Jesus Cea wrote: Would ZFS boot be able to boot from a copies boot dataset, when one of the disks are failing?. Counting that ditto blocks are spread between both disks, of course. You can not boot from a pool with multiple top-level vdevs (eg, the copies pool you describe). We hope to

Re: [zfs-discuss] io:::start and zfs filenames?

2007-10-12 Thread Matthew Ahrens
Jim Mauro wrote: Hi Neel - Thanks for pushing this out. I've been tripping over this for a while. You can instrument zfs_read() and zfs_write() to reliably track filenames: #!/usr/sbin/dtrace -s #pragma D option quiet zfs_read:entry, zfs_write:entry { printf(%s of

Re: [zfs-discuss] ZFS Space Map optimalization

2007-10-12 Thread Matthew Ahrens
Łukasz K wrote: Now space maps, intent log, spa history are compressed. All normal metadata (including space maps and spa history) is always compressed. The intent log is never compressed. Can you tell me where space map is compressed ? we specify that it should be compressed in

Re: [zfs-discuss] Some test results: ZFS + SAMBA + Sun Fire X4500 (Thumper)

2007-10-12 Thread Matthew Ahrens
Tim Thomas wrote: Hi this may be of interest: http://blogs.sun.com/timthomas/entry/samba_performance_on_sun_fire I appreciate that this is not a frightfully clever set of tests but I needed some throughput numbers and the easiest way to share the results is to blog. It seems

Re: [zfs-discuss] ZFS 60 second pause times to read 1K

2007-10-12 Thread Matthew Ahrens
Michael Kucharski wrote: We have an x4500 set up as a single 4*(raidz2 9 + 2)+2 spare pool and have the file system mounted over v5 krb5 NFS and accessed directly. The pool is a 20TB pool and is using . There are three filesystems, backup, test and home. Test has about 20 million files and

Re: [zfs-discuss] Inherited quota question

2007-10-13 Thread Matthew Ahrens
Rahul Mehta wrote: Has there been any solution to the problem discussed above in ZFS version 8?? We expect it to be fixed within a month. See: http://opensolaris.org/os/community/arc/caselog/2007/555/ --matt ___ zfs-discuss mailing list

Re: [zfs-discuss] strange zfs recieve behavior

2007-10-14 Thread Matthew Ahrens
Edward Pilatowicz wrote: hey all, so i'm trying to mirror the contents of one zpool to another using zfs send / recieve while maintaining all snapshots and clones. You will enjoy the upcoming zfs send -R feature, which will make your script unnecessary. [EMAIL PROTECTED] zfs send -i 070221
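A hedged sketch of what the recursive replication stream will look like once it integrates (pool names invented):

    # snapshot the whole pool, then send every descendant filesystem,
    # its snapshots, clones and properties in one replication stream
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs receive -F -d backup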

Re: [zfs-discuss] practicality of zfs send/receive for failover

2007-10-16 Thread Matthew Ahrens
Richard Elling wrote: Paul B. Henson wrote: On Fri, 12 Oct 2007, Paul B. Henson wrote: I've read a number of threads and blog posts discussing zfs send/receive and its applicability in such an implementation, but I'm curious if anyone has actually done something like that in practice, and if

Re: [zfs-discuss] Status on shrinking zpool

2008-01-23 Thread Matthew Ahrens
John wrote: This is one feature I've been hoping for... old threads and blogs talk about this feature possibly showing up by the end of 2007 just curious on what the status of this feature is... It's still a high priority on our road map, just pushed back a bit. Our current goal is to

Re: [zfs-discuss] Using O_EXCL flag on /dev/zvol nodes

2008-05-16 Thread Matthew Ahrens
Sumit Gupta wrote: The /dev/[r]dsk nodes implement the O_EXCL flag. If a node is opened using O_EXCL, subsequent open(2) calls to that node fail. But I don't think the same is true for /dev/zvol/[r]dsk nodes. Is that a bug (or maybe an RFE)? Yes, that seems like a fine RFE. Or a bug, if there's

Re: [zfs-discuss] zfs receive - list contents of incremental stream?

2008-06-09 Thread Matthew Ahrens
Robert Lawhead wrote: Apologies up front for failing to find related posts... Am I overlooking a way to get 'zfs send -i [EMAIL PROTECTED] [EMAIL PROTECTED] | zfs receive -n -v ...' to show the contents of the stream? I'm looking for the equivalent of ufsdump 1f - fs ... | ufsrestore tv -

Re: [zfs-discuss] resilver running for 35 trillion years

2008-06-25 Thread Matthew Ahrens
Indeed. This happens when the scrub started in the future according to the timestamp. Then we get a negative amount of time passed, which gets printed like this. We should check for this and at least print a more useful message. --matt Sanjeev Bagewadi wrote: Mike, Indeed an interesting

Re: [zfs-discuss] Lost Disk Space

2008-11-04 Thread Matthew Ahrens
Ben Rockwood wrote: I've been struggling to fully understand why disk space seems to vanish. I've dug through bits of code and reviewed all the mails on the subject that I can find, but I still don't have a proper understanding of what's going on. I did a test with a local zpool on

Re: [zfs-discuss] Race condition yields to kernel panic (u3, u4) or hanging zfs commands (u5)

2008-11-13 Thread Matthew Ahrens
Andreas Koppenhoefer wrote: Hello, occasionally we got some solaris 10 server to panic in zfs code while doing zfs send -i [EMAIL PROTECTED] [EMAIL PROTECTED] | ssh remote zfs receive poolname. The race condition(s) get triggered by a broken data transmission or killing sending zfs or ssh

Re: [zfs-discuss] zvol snapshot at size 100G

2008-11-13 Thread Matthew Ahrens
Are you sure that you don't have any refreservations? --matt Paul wrote: I apologize for the lack of info regarding the previous post. # zpool list NAME SIZE USED AVAIL CAP HEALTH ALTROOT gwvm_zpool 3.35T 3.16T 190G 94% ONLINE - rpool 135G 27.5G
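The quickest way to check (only the pool name below comes from the posted output; the volume name is a guess): a refreservation on a zvol reserves enough space to overwrite every block, which can make a snapshot look enormously expensive.

    zfs get volsize,refreservation,reservation gwvm_zpool/somevol
    # if the space guarantee is not wanted, drop it
    zfs set refreservation=none gwvm_zpool/somevol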

Re: [zfs-discuss] Possible ZFS panic on Solaris 10 Update 6

2008-11-13 Thread Matthew Ahrens
Ian, I couldn't find any bugs with a similar stack trace. Can you file a bug? --matt Ian Collins wrote: The system was an x4540 running Solaris 10 Update 6 acting as a production Samba server. The only unusual activity was me sending and receiving incremental dumps to and from another

Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-03-02 Thread Matthew Ahrens
David Magda wrote: Given the threads that have appeared on this list lately, how about codifying / standardizing the output of zfs send so that it can be backed up to tape? :) We will soon be changing the manpage to indicate that the zfs send stream will be receivable on all future versions

Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-03-02 Thread Matthew Ahrens
Blake wrote: zfs send is great for moving a filesystem with lots of tiny files, since it just handles the blocks :) I'd like to see: pool-shrinking (and an option to shrink disk A when i want disk B to become a mirror, but A is a few blocks bigger) I'm working on it. install to mirror

Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-03-03 Thread Matthew Ahrens
Greg Mason wrote: Just my $0.02, but would pool shrinking be the same as vdev evacuation? Yes. Basically, what I'm thinking is: zpool remove mypool <list of devices/vdevs> Allow time for ZFS to vacate the vdev(s), and then light up the OK to remove light on each evacuated disk. That's the

Re: [zfs-discuss] User quota design discussion..

2009-03-12 Thread Matthew Ahrens
Jorgen Lundman wrote: In the style of a discussion over a beverage, and talking about user-quotas on ZFS, I recently pondered a design for implementing user quotas on ZFS after having far too little sleep. It is probably nothing new, but I would be curious what you experts think of the

Re: [zfs-discuss] User quota design discussion..

2009-03-12 Thread Matthew Ahrens
Bob Friesenhahn wrote: On Thu, 12 Mar 2009, Jorgen Lundman wrote: User-land will then have a daemon, whether or not it is one daemon per file-system or really just one daemon does not matter. This process will open '/dev/quota' and empty the transaction log entries constantly. Take the

Re: [zfs-discuss] usedby* properties for datasets created before v13

2009-03-12 Thread Matthew Ahrens
Gavin Maltby wrote: Hi, The manpage says Specifically, used = usedbychildren + usedbydataset + usedbyrefreservation + usedbysnapshots. These properties are only available for datasets created on zpool version 13 pools. .. and I now realize that

Re: [zfs-discuss] User quota design discussion..

2009-03-12 Thread Matthew Ahrens
Jorgen Lundman wrote: Great! Will there be any particular limits on how many uids, or size of uids in your implementation? UFS generally does not, but I did note that if uid go over 1000 it flips out and changes the quotas file to 128GB in size. All UIDs, as well as SIDs (from the SMB

Re: [zfs-discuss] is 'zfs receive' atomic per snapshot?

2009-03-19 Thread Matthew Ahrens
José Gomes wrote: Can we assume that any snapshot listed by either 'zfs list -t snapshot' or 'ls .zfs/snapshot' and previously created with 'zfs receive' is complete and correct? Or is it possible for a 'zfs receive' command to fail (corrupt/truncated stream, sigpipe, etc...) and a corrupt or

[zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-03-31 Thread Matthew Ahrens
Microsystems 1. Introduction 1.1. Project/Component Working Name: ZFS user/group quotas space accounting 1.2. Name of Document Author/Supplier: Author: Matthew Ahrens 1.3 Date of This Document: 30 March, 2009 4. Technical Description ZFS user/group space
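The administrative interface the case proposes boils down to a few new per-dataset properties; a short sketch (the user/group names and dataset below are examples, not from the case materials):

    # cap how much space files owned by 'ahrens' may consume in this filesystem
    zfs set userquota@ahrens=200G tank/home
    # the same idea for a group
    zfs set groupquota@staff=1T tank/home
    # query current usage against the limit
    zfs get userused@ahrens,userquota@ahrens tank/home
    # summarize space per user or per group
    zfs userspace tank/home
    zfs groupspace tank/home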

Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-03-31 Thread Matthew Ahrens
Robert Milkowski wrote: Hello Matthew, Excellent news. Wouldn't it be better if logical disk usage would be accounted and not physical - I mean when compression is enabled should quota be accounted based by a logical file size or physical as in du? The compressed space *is* the amount of

Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-03-31 Thread Matthew Ahrens
Nicolas Williams wrote: On Tue, Mar 31, 2009 at 02:37:02PM -0500, Mike Gerdts wrote: The user or group is specified using one of the following forms: posix name (eg. ahrens) posix numeric id (eg. 126829) sid name (eg. ahr...@sun) sid numeric id (eg. S-1-12345-12423-125829) How does this work

Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-03-31 Thread Matthew Ahrens
Nicolas Williams wrote: We could also disallow them from doing zfs get useru...@name pool/zoned/fs, just make it an error to prevent them from seeing something other than what they intended. I don't see why the g-z admin should not get this data. They can of course still get the data by

Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-03-31 Thread Matthew Ahrens
Tomas Ögren wrote: On 31 March, 2009 - Matthew Ahrens sent me these 10K bytes: FYI, I filed this PSARC case yesterday, and expect to integrate into OpenSolaris in April. Your comments are welcome. http://arc.opensolaris.org/caselog/PSARC/2009/204/ Quota reporting over NFS or for userland

Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-03-31 Thread Matthew Ahrens
River Tarnell wrote: Matthew Ahrens: ZFS user quotas (like other zfs properties) will not be accessible over NFS; you must be on the machine running zfs to manipulate them. does this mean that without an account on the NFS server, a user cannot see his current disk use / quota? That's

Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-04-01 Thread Matthew Ahrens
Robert Milkowski wrote: Hello Matthew, Tuesday, March 31, 2009, 9:16:42 PM, you wrote: MA Robert Milkowski wrote: Hello Matthew, Excellent news. Wouldn't it be better if logical disk usage would be accounted and not physical - I mean when compression is enabled should quota be accounted

Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-04-01 Thread Matthew Ahrens
Mike Gerdts wrote: On Tue, Mar 31, 2009 at 7:12 PM, Matthew Ahrens matthew.ahr...@sun.com wrote: River Tarnell wrote: Matthew Ahrens: ZFS user quotas (like other zfs properties) will not be accessible over NFS; you must be on the machine running zfs to manipulate them. does this mean

Re: [zfs-discuss] zfs send -R core dumps on SXCE 110

2009-04-02 Thread Matthew Ahrens
Enrico Maria Crisostomo wrote: # zfs send -R -I @20090329 mypool/m...@20090330 | zfs recv -F -d anotherpool/anotherfs I experienced core dumps and the error message was: internal error: Arg list too long Abort (core dumped) This is 6801979, fixed in build 111. --matt

Re: [zfs-discuss] zfs promote/destroy enhancements?

2009-04-23 Thread Matthew Ahrens
Ed, zfs destroy [-r] -p sounds great. I'm not a big fan of the -t template. Do you have conflicting snapshot names due to the way your (zones) software works, or are you concerned about sysadmins creating these conflicting snapshots? If it's the former, would it be possible to change the

Re: [zfs-discuss] Resilver Performance and Behavior

2009-04-30 Thread Matthew Ahrens
Paul Kraus wrote: Sorry in advance if this has already been discussed, but I did not find it in my archives of the list. According to the ZFS documentation, a resilver operation includes what is effectively a dirty region log (DRL) so that if the resilver is interrupted, by a snapshot

Re: [zfs-discuss] snapshot management issues

2009-05-09 Thread Matthew Ahrens
Edward Pilatowicz wrote: hey all, so recently i wrote some zones code to manage zones on zfs datasets. the code i wrote did things like rename snapshots and promote filesystems. while doing this work, i found a few zfs behaviours that, if changed, could greatly simplify my work. the primary

Re: [zfs-discuss] Much room for improvement for zfs destroy -r ...

2009-05-09 Thread Matthew Ahrens
Joep Vesseur wrote: I was wondering why zfs destroy -r is so excruciatingly slow compared to parallel destroys. This issue is bug # 6631178. The problem is that zfs destroy -r filesystem destroys each filesystem and snapshot individually, and each one must wait for a txg to sync (0.1 - 10

Re: [zfs-discuss] ZFS userquota groupquota test

2009-05-20 Thread Matthew Ahrens
Jorgen Lundman wrote: I have been playing around with osol-nv-b114 version, and the ZFS user and group quotas. First of all, it is fantastic. Thank you all! (Sun, Ahrens and anyone else involved). Thanks for the feedback! I was unable to get ZFS quota to work with rquota. (Ie, NFS mount

Re: [zfs-discuss] ZFS userquota groupquota test

2009-05-21 Thread Matthew Ahrens
Jorgen Lundman wrote: Oh I forgot the more important question. Importing all the user quota settings; Currently as a long file of zfs set commands, which is taking a really long time. For example, yesterday's import is still running. Are there bulk-import solutions? Like zfs set -f file.txt
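There is no bulk-import command today, so the usual workaround is to drive zfs set from a file; a minimal sketch assuming a two-column input of user name and limit (each zfs set still pays the per-command overhead, so this only tidies the bookkeeping rather than making the import faster):

    # quotas.txt contains lines like:  lundman 10G
    while read user limit; do
        zfs set "userquota@${user}=${limit}" tank/home
    done < quotas.txt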

Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Matthew Ahrens
Brian Kolaci wrote: So Sun would see increased hardware revenue stream if they would just listen to the customer... Without [pool shrink], they look for alternative hardware/software vendors. Just to be clear, Sun and the ZFS team are listening to customers on this issue. Pool shrink has

Re: [zfs-discuss] ZFS Recv slow with high CPU

2009-09-21 Thread Matthew Ahrens
Tristan Ball wrote: Hi Everyone, I have a couple of systems running opensolaris b118, one of which sends hourly snapshots to the other. This has been working well, however as of today, the receiving zfs process has started running extremely slowly, and is running at 100% CPU on one core,

Re: [zfs-discuss] ZFS Recv slow with high CPU

2009-09-22 Thread Matthew Ahrens
Tristan Ball wrote: OK, Thanks for that. From reading the RFE, it sound's like having a faster machine on the receive side will be enough to alleviate the problem in the short term? That's correct. --matt ___ zfs-discuss mailing list

Re: [zfs-discuss] Hot Space vs. hot spares

2009-09-30 Thread Matthew Ahrens
Brandon, Yes, this is something that should be possible once we have bp rewrite (the ability to move blocks around). One minor downside to hot space would be that it couldn't be shared among multiple pools the way that hot spares can. Also depending on the pool configuration, hot space may

Re: [zfs-discuss] Hot Space vs. hot spares

2009-09-30 Thread Matthew Ahrens
Erik Trimble wrote: From a global perspective, multi-disk parity (e.g. raidz2 or raidz3) is the way to go instead of hot spares. Hot spares are useful for adding protection to a number of vdevs, not a single vdev. Even when using raidz2 or 3, it is useful to have hot spares so that

Re: [zfs-discuss] Help! System panic when pool imported

2009-10-19 Thread Matthew Ahrens
Thanks for reporting this. I have fixed this bug (6822816) in build 127. Here is the evaluation from the bug report: The problem is that the clone's dsobj does not appear in the origin's ds_next_clones_obj. The bug can occur can occur under certain circumstances if there was a botched

Re: [zfs-discuss] ZFS and quota/refqoutoa question

2009-10-20 Thread Matthew Ahrens
Peter Wilk wrote: tank/apps will be mounted as /apps -- needs to be set to 10G. tank/apps/data1 will need to be mounted as /apps/data1 and needs to be set to 20G alone. The question is: if refquota is being used to set the filesystem sizes on /apps and /apps/data1, /apps/data1 will not be
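A sketch of the layout being asked about (sizes from the question, names otherwise illustrative); refquota limits only the space a filesystem itself references, so the child's usage does not count against the parent's limit:

    zfs create -o mountpoint=/apps tank/apps
    zfs create tank/apps/data1          # inherits mountpoint /apps/data1
    # limit each filesystem's own data, excluding descendants and snapshots
    zfs set refquota=10G tank/apps
    zfs set refquota=20G tank/apps/data1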

Re: [zfs-discuss] group and user quotas - a temporary hack?

2009-10-20 Thread Matthew Ahrens
Alastair Neil wrote: However, the user or group quota is applied when a clone or a snapshot is created from a file system that has a user or group quota. applied to a clone I understand what that means, applied to a snapshot - not so clear does it mean enforced on the original dataset?

Re: [zfs-discuss] ZFS user quota, userused updates?

2009-10-20 Thread Matthew Ahrens
The user/group used can be out of date by a few seconds, same as the used and referenced properties. You can run sync(1M) to wait for these values to be updated. However, that doesn't seem to be the problem you are encountering here. Can you send me the output of: zfs list zpool1/sd01_mail

Re: [zfs-discuss] group and user quotas - a temporary hack?

2009-10-20 Thread Matthew Ahrens
Alastair Neil wrote: On Tue, Oct 20, 2009 at 12:12 PM, Matthew Ahrens matthew.ahr...@sun.com mailto:matthew.ahr...@sun.com wrote: Alastair Neil wrote: However, the user or group quota is applied when a clone or a snapshot is created from a file system that has

Re: [zfs-discuss] ZFS user quota, userused updates?

2009-10-20 Thread Matthew Ahrens
Tomas Ögren wrote: On a related note, there is a way to still have quota used even after all files are removed, S10u8/SPARC: In this case there are two directories that have not actually been removed. They have been removed from the namespace, but they are still open, eg due to some

Re: [zfs-discuss] ZFS user quota, userused updates?

2009-10-20 Thread Matthew Ahrens
Tomas Ögren wrote: On 20 October, 2009 - Matthew Ahrens sent me these 0,7K bytes: Tomas Ögren wrote: On a related note, there is a way to still have quota used even after all files are removed, S10u8/SPARC: In this case there are two directories that have not actually been removed. They have

[zfs-discuss] heads-up: dedup=fletcher4,verify was broken

2009-11-23 Thread Matthew Ahrens
If you did not do zfs set dedup=fletcher4,verify fs (which is available in build 128 and nightly bits since then), you can ignore this message. We have changed the on-disk format of the pool when using dedup=fletcher4,verify with the integration of: 6903705 dedup=fletcher4,verify doesn't

Re: [zfs-discuss] Heads up: SUNWzfs-auto-snapshot obsoletion in snv 128

2009-11-23 Thread Matthew Ahrens
Andrew Gabriel wrote: Kjetil Torgrim Homme wrote: Daniel Carosone d...@geek.com.au writes: Would there be a way to avoid taking snapshots if they're going to be zero-sized? I don't think it is easy to do, the txg counter is on a pool level, AFAIK: # zdb -u spool Uberblock

Re: [zfs-discuss] heads-up: dedup=fletcher4,verify was broken

2009-11-23 Thread Matthew Ahrens
functionality has been removed. We will investigate whether it's possible to fix these issues and re-enable this functionality. --matt Matthew Ahrens wrote: If you did not do zfs set dedup=fletcher4,verify fs (which is available in build 128 and nightly bits since then), you can ignore this message

Re: [zfs-discuss] Confusion regarding 'zfs send'

2009-12-10 Thread Matthew Ahrens
Brandon High wrote: I'm playing around with snv_128 on one of my systems, and trying to see what kinda of benefits enabling dedup will give me. The standard practice for reprocessing data that's already stored to add compression and now dedup seems to be a send / receive pipe similar to:
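The send/receive "reprocessing" idiom referred to looks roughly like this (dataset names invented); blocks written by the receive inherit the compression and dedup settings in effect on the destination, so the copy ends up compressed and deduped:

    zfs set compression=on tank
    zfs set dedup=on tank
    zfs snapshot tank/data@rewrite
    zfs send tank/data@rewrite | zfs receive tank/data-new
    # after verifying the copy, swap the names
    zfs rename tank/data tank/data-old
    zfs rename tank/data-new tank/data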

Re: [zfs-discuss] quotas on zfs at solaris 10 update 9 (10/09)

2009-12-11 Thread Matthew Ahrens
Len Zaifman wrote: We have just update a major file server to solaris 10 update 9 so that we can control user and group disk usage on a single filesystem. We were using qfs and one nice thing about samquota was that it told you your soft limit, your hard limit and your usage on disk space and

Re: [zfs-discuss] directory size on compressed file system on Solaris 10

2009-12-21 Thread Matthew Ahrens
Gaëtan Lehmann wrote: Hi, On opensolaris, I use du with the -b option to get the uncompressed size of a directory: r...@opensolaris:~# du -sh /usr/local/ 399M /usr/local/ r...@opensolaris:~# du -sbh /usr/local/ 915M /usr/local/ r...@opensolaris:~# zfs list -o

Re: [zfs-discuss] Problems with send/receive

2010-01-19 Thread Matthew Ahrens
John Meyer wrote: Looks like this part got cut off somehow: the filesystem mount point is set to /usr/local/local. I just want to do a simple backup/restore, can anyone tell me something obvious that I'm not doing right? Using OpenSolaris development build 130. Sounds like bug 6916662,

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-19 Thread Matthew Ahrens
Michael Schuster wrote: Mike Gerdts wrote: On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi mikko.la...@lmmz.net wrote: Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Matthew Ahrens
This is RFE 6425091 want 'zfs diff' to list files that have changed between snapshots, which covers both file directory changes, and file removal/creation/renaming. We actually have a prototype of zfs diff. Hopefully someday we will finish it up... --matt Henu wrote: Hello Is there a
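For reference, the interface the RFE sketches is along these lines (treat the exact syntax as provisional until the prototype integrates):

    # list files created, removed, renamed or modified between two snapshots
    zfs diff tank/home@monday tank/home@tuesday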

Re: [zfs-discuss] Dedup Questions.

2010-02-09 Thread Matthew Ahrens
Tom Hall wrote: Re the DDT, can someone outline it's structure please? Some sort of hash table? The blogs I have read so far dont specify. It is stored in a ZAP object, which is an extensible hash table. See zap.[ch], ddt_zap.c, ddt.h --matt ___

Re: [zfs-discuss] ZFS Group Quotas

2010-08-24 Thread Matthew Ahrens
Jordan Schwartz wrote: ZFSfolk, Pardon the slightly offtopic post, but I figured this would be a good forum to get some feedback. I am looking at implementing zfs group quotas on some X4540s and X4140/J4400s, 64GB of RAM per server, running Solaris 10 Update 8 servers with IDR143158-06. There

Re: [zfs-discuss] zfs send|recv and inherited recordsize

2010-10-04 Thread Matthew Ahrens
That's correct. This behavior is because the send|recv operates on the DMU objects, whereas the recordsize property is interpreted by the ZPL. The ZPL checks the recordsize property when a file grows. But the recv doesn't grow any files, it just dumps data into the underlying objects. --matt

Re: [zfs-discuss] possible zfs recv bug?

2010-11-23 Thread Matthew Ahrens
I verified that this bug exists in OpenSolaris as well. The problem is that we can't destroy the old filesystem a (which has been renamed to rec2/recv-2176-1 in this case). We can't destroy it because it has a child, b. We need to rename b to be under the new a. However, we are not renaming

Re: [zfs-discuss] possible zfs recv bug?

2010-12-02 Thread Matthew Ahrens
I verified that this bug exists in OpenSolaris as well. The problem is that we can't destroy the old filesystem a (which has been renamed to rec2/recv-2176-1 in this case). We can't destroy it because it has a child, b. We need to rename b to be under the new a. However, we are not renaming

Re: [zfs-discuss] zfs send receive problem/questions

2010-12-03 Thread Matthew Ahrens
On Wed, Dec 1, 2010 at 10:30 AM, Don Jackson don.jack...@gmail.com wrote: # zfs send -R naspool/open...@xfer-11292010 | zfs receive -Fv npool/openbsd receiving full stream of naspool/open...@xfer-11292010 into npool/open...@xfer-11292010 received 23.5GB stream in 883 seconds (27.3MB/sec)

Re: [zfs-discuss] snaps lost in space?

2010-12-08 Thread Matthew Ahrens
usedsnap is the amount of space consumed by all snapshots. Ie, the amount of space that would be recovered if all snapshots were to be deleted. The space used by any one snapshot is the space that would be recovered if that snapshot was deleted. Ie, the amount of space that is unique to that
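Those components are exposed directly as properties, so they are easy to inspect (dataset name illustrative):

    # break a dataset's 'used' down into its components
    zfs get used,usedbysnapshots,usedbydataset,usedbychildren,usedbyrefreservation tank/home
    # per snapshot: 'used' is space unique to it, 'referenced' is what it can see
    zfs list -t snapshot -o name,used,referenced -r tank/home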

Re: [zfs-discuss] ZFS send/receive while write is enabled on receive side?

2010-12-09 Thread Matthew Ahrens
On Thu, Dec 9, 2010 at 5:31 PM, Ian Collins i...@ianshome.com wrote: On 12/10/10 12:31 PM, Moazam Raja wrote: So, is it OK to send/recv while having the receive volume write enabled? A write can fail if a filesystem is unmounted for update. True, but ZFS recv will not normally unmount a

Re: [zfs-discuss] Size of incremental stream

2011-01-11 Thread Matthew Ahrens
On Mon, Jan 10, 2011 at 2:40 PM, fred f...@mautadine.com wrote: Hello, I'm having a weird issue with my incremental setup. Here is the filesystem as it shows up with zfs list: NAME USED AVAIL REFER MOUNTPOINT Data/FS1 771M 16.1T

Re: [zfs-discuss] Size of incremental stream

2011-01-13 Thread Matthew Ahrens
On Thu, Jan 13, 2011 at 4:36 AM, fred f...@mautadine.com wrote: Thanks for this explanation So there is no real way to estimate the size of the increment? Unfortunately not for now. Anyway, for this particular filesystem, i'll stick with rsync and yes, the difference was 50G! Why? I
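(For what it's worth, considerably newer ZFS bits grew a dry-run size estimate; it was not available on the builds discussed in this thread, and the snapshot names below are invented:)

    # print an estimate of the incremental stream size without sending anything
    zfs send -nv -i Data/FS1@snap1 Data/FS1@snap2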
