Re: [zfs-discuss] Freeing unused space in thin provisioned zvols

2013-02-12 Thread Stefan Ring
> Unless you do a shrink on the vmdk and use a zfs variant with scsi unmap support (I believe currently only Nexenta but correct me if I am wrong) the blocks will not be freed, will they?

Solaris 11.1 has ZFS with SCSI UNMAP support. Freeing unused blocks works perfectly well with fstrim
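
For concreteness, a minimal sketch of the fstrim path (assuming a Linux guest whose filesystem sits on the thin-provisioned zvol, that discards are passed all the way down the stack, and a hypothetical zvol name tank/vm01):

  # inside the guest: discard unused filesystem blocks (-v prints how much was trimmed)
  fstrim -v /

  # on the host: the zvol's referenced space should shrink accordingly
  zfs get referenced,volsize tank/vm01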

Re: [zfs-discuss] Question about ZFS snapshots

2012-09-20 Thread Stefan Ring
On Fri, Sep 21, 2012 at 6:31 AM, andy thomas a...@time-domain.co.uk wrote:
> I have a ZFS filesystem and create weekly snapshots over a period of 5 weeks called week01, week02, week03, week04 and week05 respectively. My question is: how do the snapshots relate to each other - does week03 contain
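
Snapshots do not contain one another: each snapshot pins the blocks that existed at its creation, and blocks unchanged across snapshots are shared among them. A hedged sketch with a hypothetical dataset tank/data:

  zfs snapshot tank/data@week01
  # ... a week of changes later ...
  zfs snapshot tank/data@week02

  # USED shows only the space unique to each snapshot; blocks shared
  # between snapshots are not charged to any single one
  zfs list -t snapshot -o name,used,refer -r tank/data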

Re: [zfs-discuss] ZFS ok for single disk dev box?

2012-08-30 Thread Stefan Ring
> I asked what I thought was a simple question but most of the answers don't have too much to do with the question.

Hehe, welcome to mailing lists ;).

> What I'd really like is an option (maybe it exists) in ZFS to say when a block fails a checksum tell me which file it affects

It does exactly
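
That ability exists out of the box: after a checksum failure, zpool status -v names the affected files. An illustration with a hypothetical pool and path:

  zpool status -v tank
    ...
    errors: Permanent errors have been detected in the following files:
            /tank/home/somefile.dat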

Re: [zfs-discuss] ZFS snapshot used space question

2012-08-29 Thread Stefan Ring
On Wed, Aug 29, 2012 at 8:58 PM, Timothy Coalson tsc...@mst.edu wrote:
> As I understand it, the used space of a snapshot does not include anything that is in more than one snapshot.

True. It shows the amount that would be freed if you destroyed the snapshot right away. Data held onto by more
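
Reasonably recent implementations can report that number without destroying anything; a hedged sketch with a hypothetical snapshot name:

  # -n: dry run, -v: print the space that would be reclaimed
  zfs destroy -nv tank/data@week03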

Re: [zfs-discuss] Missing Disk Space

2012-08-06 Thread Stefan Ring
Have you not seen my answer? http://mail.opensolaris.org/pipermail/zfs-discuss/2012-August/052170.html

Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-06 Thread Stefan Ring
Unfortunately, the Intel 520 does *not* power protect its on-board volatile cache (unlike the Intel 320/710 SSD). Intel has an eye-opening technology brief, describing the benefits of power-loss data protection at:

Re: [zfs-discuss] Missing disk space

2012-08-04 Thread Stefan Ring
On Sat, Aug 4, 2012 at 12:00 AM, Burt Hailey bhai...@triunesystems.com wrote:
> We do hourly snapshots. Two days ago I deleted 100GB of data and did not see a corresponding increase in snapshot sizes. I’m new to zfs and am reading the zfs admin handbook but I wanted to post this to get some
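
Note that deleted data pinned by hourly snapshots often shows up in no single snapshot's USED column, because blocks referenced by more than one snapshot are charged to none of them; the usedbysnapshots property shows the total. A hedged sketch with a hypothetical dataset name:

  # total space pinned by all snapshots vs. by the live dataset
  zfs list -o name,used,usedbydataset,usedbysnapshots tank/data
  zfs list -t snapshot -r tank/data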

Re: [zfs-discuss] zfs sata mirror slower than single disk

2012-07-16 Thread Stefan Ring
> 2) in the mirror case the write speed is cut by half, and the read speed is the same as a single disk. I'd expect about twice the performance for both reading and writing, maybe a bit less, but definitely more than measured.

I wouldn't expect mirrored read to be faster than single-disk read,

Re: [zfs-discuss] zfs sata mirror slower than single disk

2012-07-16 Thread Stefan Ring
> It is normal for reads from mirrors to be faster than for a single disk because reads can be scheduled from either disk, with different I/Os being handled in parallel.

That assumes that there *are* outstanding requests to be scheduled in parallel, which would only happen with multiple readers
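
One way to see the difference, sketched with hypothetical file names (and with the caveat that prefetch can still help the single-stream case):

  # one stream: little opportunity to keep both mirror halves busy
  dd if=/tank/big1 of=/dev/null bs=1024k

  # two streams: independent outstanding requests can be spread across both disks
  dd if=/tank/big1 of=/dev/null bs=1024k &
  dd if=/tank/big2 of=/dev/null bs=1024k &
  wait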

Re: [zfs-discuss] Interaction between ZFS intent log and mmap'd files

2012-07-05 Thread Stefan Ring
> Actually, a write to memory for a memory mapped file is more similar to write(2). If two programs have the same file mapped then the effect on the memory they share is instantaneous because it is the same physical memory.

A mmapped file becomes shared memory as soon as it is mapped at least

Re: [zfs-discuss] Interaction between ZFS intent log and mmap'd files

2012-07-04 Thread Stefan Ring
> It really makes no sense at all to have munmap(2) not imply msync(3C).

Why not? munmap(2) does basically the equivalent of write(2). In the case of write, that is: a later read from the same location will see the written data, unless another write happens in-between. If power goes down following
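
To make the analogy concrete, a minimal C sketch (hypothetical file name, error handling omitted): the store is immediately visible to any other process mapping the same file, but only an explicit msync(3C) with MS_SYNC guarantees it has reached stable storage, exactly like an un-synced write(2).

  #include <fcntl.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("data.bin", O_RDWR);  /* hypothetical, page-sized existing file */
      char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

      memcpy(p, "hello", 5);  /* like write(2): other mappers see this at once */

      /* munmap alone gives no durability; after a crash the update may be
         lost, just like a write(2) that was never fsync'ed. Forcing it: */
      msync(p, 4096, MS_SYNC);

      munmap(p, 4096);
      close(fd);
      return 0;
  }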

Re: [zfs-discuss] Recovery of RAIDZ with broken label(s)

2012-06-16 Thread Stefan Ring
> when you say remove the device, I assume you mean simply make it unavailable for import (I can't remove it from the vdev).

Yes, that's what I meant.

  root@openindiana-01:/mnt# zpool import -d /dev/lofi
    pool: ZP-8T-RZ1-01
      id: 9952605666247778346
   state: FAULTED
  status: One or more

Re: [zfs-discuss] snapshot size

2012-06-05 Thread Stefan Ring
> Two questions from a newbie.
> 1/ What does REFER mean in zfs list?

The amount of data that is reachable from the file system root. It's just what I would call the contents of the file system.

> 2/ How can I know the total size of all snapshots for a partition?
> (OK I can add
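
There is a property for exactly that; a hedged sketch with a hypothetical dataset name:

  # total space consumed by all snapshots of the dataset, in one number
  zfs get usedbysnapshots tank/home

  # or alongside the other space-accounting columns
  zfs list -o name,used,usedbydataset,usedbysnapshots tank/home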

Re: [zfs-discuss] snapshot size

2012-06-05 Thread Stefan Ring
> Can I say USED - REFER = snapshot size?

No. USED is the space that would be freed if you destroyed the snapshot _right now_. This can change (and usually does) if you destroy previous snapshots.

Re: [zfs-discuss] ZFS on Linux vs FreeBSD

2012-04-25 Thread Stefan Ring
> I saw one team revert from ZoL (CentOS 6) back to ext on some backup servers for an application project, the killer was stat times (find running slow etc.), perhaps more layer 2 cache could have solved the problem, but it was easier to deploy ext/lvm2.

But stat times (think directory

[zfs-discuss] What is your data error rate?

2012-01-24 Thread Stefan Ring
After having read this mailing list for a little while, I get the impression that there are at least some people who regularly experience on-disk corruption that ZFS should be able to report and handle. I’ve been running a raidz1 on three 1TB consumer disks for approx. 2 years now (about 90%
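
For anyone who wants to collect the same statistic, a hedged sketch (hypothetical pool name): a scrub reads and verifies every allocated block, and the per-device CKSUM column plus the error summary show what was found:

  zpool scrub tank
  # once the scrub completes:
  zpool status -v tank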

Re: [zfs-discuss] Data loss by memory corruption?

2012-01-17 Thread Stefan Ring
> The issue is definitely not specific to ZFS. For example, the whole OS depends on reliable memory content in order to function. Likewise, no one likes it if characters mysteriously change in their word processing documents.

I don’t care too much if a single document gets corrupted – there’ll

[zfs-discuss] Data loss by memory corruption?

2012-01-14 Thread Stefan Ring
Inspired by the paper "End-to-end Data Integrity for File Systems: A ZFS Case Study" [1], I've been thinking about whether it is possible to devise a way in which a minimal in-memory data corruption would cause massive data loss. I could imagine a scenario where an entire directory branch drops off the tree