Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Matthew Stevenson
The guide is good, but unfortunately it didn't tell me anything I didn't already know about this area. Anyway, I freed up a big chunk of space by first deleting the snapshot which was reported by zfs list as being the largest (2GB). Doing zfs list after this deletion revealed that several of the

Re: [zfs-discuss] ZFS ate my RAID-10 data

2009-08-18 Thread Ross
I'm no expert, but it sounds like this: http://opensolaris.org/jive/thread.jspa?threadID=80232 Can you remove the faulted disk? I found this as well, but I don't think I'd be too comfortable using zpool destroy as a recovery tool... http://forums.sun.com/thread.jspa?threadID=5259623 It also

Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Fajar A. Nugraha
On Tue, Aug 18, 2009 at 2:37 PM, Matthew Stevenson no-re...@opensolaris.org wrote: So there must be basically lots of references to data that hide themselves from the surface and can't really be found using zfs list. zfs list -t all usually works for me. Look at USED and REFER. My understanding
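For reference, the kind of listing Fajar is describing looks something like this (the pool/dataset names are illustrative, and the `-o space` shorthand and `usedbysnapshots` breakdown exist only on recent builds):

```shell
# Show every dataset AND snapshot with the columns discussed here:
zfs list -r -t all -o name,used,refer tank/home

# Break USED down into USEDSNAP, USEDDS, USEDREFRESERV and USEDCHILD:
zfs list -r -o space tank/home
```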

Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Matthew Stevenson
Hi, thanks for the info. Can you have a look at the attachment on the original post for me? Everything you said is what I expected to see in the output there, but a lot of the values are blank where I hoped they would at least be able to tell me a breakdown of the USEDSNAP figure. As far as I

Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Fajar A. Nugraha
On Tue, Aug 18, 2009 at 4:09 PM, Matthew Stevenson no-re...@opensolaris.org wrote: Hi, thanks for the info. Can you have a look at the attachment on the original post for me? Everything you said is what I expected to see in the output there, but a lot of the values are blank where I hoped

Re: [zfs-discuss] ZFS Crypto Updates [PSARC/2009/443 FastTrack timeout 08/24/2009]

2009-08-18 Thread Darren J Moffat
Garrett D'Amore wrote: Darren J Moffat wrote: Dataset rename restrictions --- On rename a dataset can not be moved out of its wrapping key hierarchy, i.e. where it inherits the keysource property from. This is best explained by example: # zfs get -r keysource tank

Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Matthew Stevenson
Well, I see USEDSNAP 13.8 GB for the dataset, so if you delete ALL snapshots you'd probably be able to get that much. I agree, it's just hard to see how... As for which snapshot to delete to get the most space, that's a little bit tricky. See

Re: [zfs-discuss] ZFS Crypto Updates [PSARC/2009/443 FastTrack timeout 08/24/2009]

2009-08-18 Thread Brian Hechinger
On Tue, Aug 18, 2009 at 12:37:23AM +0100, Robert Milkowski wrote: Hi Darren, Thank you for the update. Have you got any ETA (build number) for the crypto project? Also, is there any word on if this will support the hardware crypto stuff in the VIA CPUs natively? That would be nice. :)

Re: [zfs-discuss] ZFS Crypto Updates [PSARC/2009/443 FastTrack timeout 08/24/2009]

2009-08-18 Thread Constantin Gonzalez
Hi, Brian Hechinger wrote: On Tue, Aug 18, 2009 at 12:37:23AM +0100, Robert Milkowski wrote: Hi Darren, Thank you for the update. Have you got any ETA (build number) for the crypto project? Also, is there any word on if this will support the hardware crypto stuff in the VIA CPUs natively?

[zfs-discuss] ETA for 6574286 removing a slog doesn't work?

2009-08-18 Thread Roman Naumenko
Is anybody aware whether this bug is going to be fixed in the near future? IBM just started to sell the new X25 model for half the price. -- Roman -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] ZFS Crypto Updates [PSARC/2009/443 FastTrack timeout 08/24/2009]

2009-08-18 Thread Darren J Moffat
Brian Hechinger wrote: On Tue, Aug 18, 2009 at 12:37:23AM +0100, Robert Milkowski wrote: Hi Darren, Thank you for the update. Have you got any ETA (build number) for the crypto project? Also, is there any word on if this will support the hardware crypto stuff? That has always been the plan

Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Thomas Burgess
It's pretty simple, if I understand it correctly. When you add some blocks to ZFS... xxx, then take a snapshot (a snapshot of the x's), the disk holds the space of the x's and the snapshot doesn't take up any space yet. Then you add more to the drive and maybe take another snapshot

Re: [zfs-discuss] zfs fragmentation

2009-08-18 Thread Mertol Ozyoney
There is work underway to make NDMP more efficient on highly fragmented file systems with a lot of small files. I am not a development engineer, so I don't know much, and I do not think there is any committed work. However, the ZFS engineers on the forum may comment further. Mertol Mertol Ozyoney Storage

Re: [zfs-discuss] ZFS nfs performance on ESX4i

2009-08-18 Thread Mertol Ozyoney
Hi Ashley; a RAID-Z group is OK for throughput, but by design the whole RAID-Z group behaves like a single disk for random I/O, so your max IOPS is around 100. I'd personally use RAID-10 instead. Also, you seem to have no write cache, which can affect performance. Try using a log device. Best regards
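Mertol's point can be put into rough numbers. A sketch of the arithmetic, assuming ~100 random IOPS per spindle and an 8-disk pool (both figures are illustrative assumptions, not benchmarks):

```python
# Back-of-envelope random-read IOPS for two 8-disk layouts, assuming
# ~100 IOPS per 7200 rpm spindle (an assumption, not a measurement).
DISK_IOPS = 100
N_DISKS = 8

# A raidz group touches every data disk for each logical random read,
# so the whole group delivers roughly one disk's worth of random IOPS.
raidz_iops = DISK_IOPS

# Four 2-way mirrors: each mirror vdev serves reads independently,
# and either half of a mirror can satisfy a read.
mirror_iops = (N_DISKS // 2) * 2 * DISK_IOPS

print(raidz_iops, mirror_iops)  # -> 100 800
```

The gap is why mirrors are usually preferred for random-I/O workloads like NFS-backed VMs, while raidz still streams sequential data well.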

Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Matthew Stevenson
I do understand these concepts, but to me that still doesn't explain why adding the size of each snapshot together doesn't equal the size reported by zfs list in USEDSNAP. I'm clearly missing something. Hmmm...

Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Richard Elling
On Aug 18, 2009, at 9:04 AM, Matthew Stevenson wrote: I do understand these concepts, but to me that still doesn't explain why adding the size of each snapshot together doesn't equal the size reported by zfs list in USEDSNAP. Here is the pertinent text from the ZFS Admin Guide.

Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Thomas Burgess
Dude, I just explained it =) OK... let me see if I can do better... If you have a file that's 1 GB, in ZFS you have those blocks added. On a normal filesystem, when you edit the file or add to it, it will erase the old file and write a new one over it (more or less). On ZFS, you have the blocks
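Thomas's description can be mocked up in a few lines. This is a toy model of copy-on-write, not real ZFS code; the block IDs and contents are made up:

```python
# Toy copy-on-write model: the live filesystem and each snapshot are
# maps from block-id to content; pool usage is the set of all unique
# blocks referenced by anything.
live = {1: "a", 2: "b", 3: "c"}    # three blocks written
snap1 = dict(live)                 # snapshot: new references, no copies

def pool_used(*trees):
    # Each unique (block-id, content) pair occupies one block on disk.
    return len({(bid, data) for t in trees for bid, data in t.items()})

assert pool_used(live, snap1) == 3  # the snapshot costs nothing yet

live[2] = "B"                       # "overwrite": COW allocates a new block
# The old block 2 is now held only by snap1, so usage grows by one.
assert pool_used(live, snap1) == 4
```

Nothing is ever rewritten in place; space only comes back when the last reference to a block (filesystem or snapshot) goes away.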

[zfs-discuss] Is it possible to replicate an entire zpool with AVS?

2009-08-18 Thread Paul Choi
Hello, Is it possible to replicate an entire zpool with AVS? From what I see, you can replicate a zvol, because AVS is filesystem agnostic. I can create zvols within a pool, and AVS can replicate those, but that's not really what I want. If I create a zpool called disk1,

Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Matthew Stevenson
Ha ha, I know! Like I say, I do get COW principles! I guess what I'm after is for someone to look at my specific example (in txt file attached to first post) and tell me specifically how to find out where the 13.8GB number is coming from. I feel like a total numpty for going on about this, I

Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-18 Thread Thomas Burgess
If you understand how copy-on-write works and how snapshots work, then the concept of the extra space should make perfect sense. If you want a mathematical formula for how to figure it out, I would have to say it would be based on how DIFFERENT the data is between snapshots AND how MUCH data it
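The accounting wrinkle behind this thread can be shown with plain sets: a snapshot's own USED counts only blocks unique to that snapshot, while the dataset's USEDSNAP counts every block held only by snapshots, so a block shared by two snapshots (but deleted from the live filesystem) shows up in USEDSNAP yet in neither snapshot's USED. A toy model, not real ZFS internals:

```python
live  = {"a", "d"}          # blocks the live filesystem still references
snap1 = {"a", "b", "c"}     # blocks referenced by each snapshot
snap2 = {"a", "b", "d"}

# USED for one snapshot: blocks freed by deleting ONLY that snapshot,
# i.e. blocks nothing else references.
used_snap1 = snap1 - snap2 - live       # {'c'}
used_snap2 = snap2 - snap1 - live       # empty
# USEDSNAP for the dataset: everything held by snapshots but not live data.
usedsnap = (snap1 | snap2) - live       # {'b', 'c'}

# The per-snapshot figures sum to 1 block, yet USEDSNAP is 2 blocks:
# block 'b' is shared by both snapshots, so it is charged to neither.
print(len(used_snap1) + len(used_snap2), len(usedsnap))  # -> 1 2
```

Deleting snap1 alone would free only 'c'; deleting both snapshots frees 'b' and 'c', which is why USEDSNAP can exceed the sum of the individual snapshot sizes shown by zfs list.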

Re: [zfs-discuss] *Almost* empty ZFS filesystem - 14GB?

2009-08-18 Thread Chris Murray
I don't have quotas set, so I think I'll have to put this down to some sort of bug. I'm on SXCE 105 at the minute; the ZFS version is 3, but the zpool is version 13 (could be 14 if I upgrade). I don't have everything backed up, so I won't do a zpool upgrade just at the minute. I think when SXCE 120 is

Re: [zfs-discuss] data disappear

2009-08-18 Thread Rafal Ciepiela
Bingo! After several updates I have many boot environments. Thanks.

Re: [zfs-discuss] *Almost* empty ZFS filesystem - 14GB?

2009-08-18 Thread Nicolas Williams
Perhaps an open 14GB, zero-link file?

[zfs-discuss] zfs send speed

2009-08-18 Thread Paul Kraus
Posted from the wrong address the first time, sorry. Is the speed of a 'zfs send' dependent on file size / number of files? We have a system with some large datasets (3.3 TB and about 35 million files) and conventional backups take a long time (using NetBackup 6.5, a FULL takes between

Re: [zfs-discuss] zfs send speed

2009-08-18 Thread Joseph L. Casale
Is the speed of a 'zfs send' dependent on file size / number of files? I am going to say no: I have *far* inferior iron that I run a backup rig on, doing send/recv over ssh through GigE, and last night's replication gave the following: received 40.2GB stream in 3498 seconds

Re: [zfs-discuss] zfs incremental send stream size

2009-08-18 Thread michael
Is there perhaps a workaround for this? A way to condense the free-blocks information? If not, any idea when an improvement might be implemented? We are currently suffering from incremental snapshots that refer to zero new blocks, but whose incremental send streams required over a gigabyte

[zfs-discuss] Behind the scenes of 'invalid vdev configuration'

2009-08-18 Thread Galen
I am dealing with a zpool that's refusing to import, and reporting invalid vdev configuration. How can I learn more about what exactly this means? Can I isolate which disk(s) are missing or corrupted/failing? zpool import provides some information, but not enough. Confusingly, it lists

Re: [zfs-discuss] zfs send speed

2009-08-18 Thread Mattias Pantzare
On Tue, Aug 18, 2009 at 22:22, Paul Kraus pk1...@gmail.com wrote: Posted from the wrong address the first time, sorry. Is the speed of a 'zfs send' dependent on file size / number of files? We have a system with some large datasets (3.3 TB and about 35 million files) and conventional