Re: [zfs-discuss] ZFS and spread-spares (kinda like GPFS declustered RAID)?

2012-01-10 Thread Karl Wagner
On Sun, January 8, 2012 00:28, Bob Friesenhahn wrote: > I think that I would also be interested in a system which uses the so-called spare disks for more protective redundancy but then reduces that protective redundancy in order to use that disk to replace a failed disk or to automatically

[zfs-discuss] FreeBSD ZFS

2012-08-09 Thread Karl Wagner
Hi everyone, I have a couple of questions regarding FreeBSD's ZFS support. Firstly, I believe it currently stands at zpool v28. Is this correct? Will this be updated any time soon? Also, looking at the Wikipedia page, the updates beyond this are: 29 Solaris Nevada b148 RAID-Z/mirror
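
For reference, the pool version a system supports and a pool's current on-disk version can be checked directly; a minimal sketch (the pool name is a placeholder):

    zpool upgrade -v         # lists every pool version the installed ZFS code supports
    zpool get version tank   # reports the on-disk version of an existing pool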

Re: [zfs-discuss] FreeBSD ZFS

2012-08-09 Thread Karl Wagner
On 2012-08-09 11:35, Jim Klimov wrote: 2012-08-09 13:57, Karl Wagner wrote: Hi everyone, I have a couple of questions regarding FreeBSD's ZFS support. Firstly, I believe it currently stands at zpool v28. Is this correct? Will this be updated any time soon? Also, looking at the Wiki

Re: [zfs-discuss] Dedicated metadata devices

2012-08-28 Thread Karl Wagner
On 2012-08-24 14:39, Jim Klimov wrote: Hello all, The idea of dedicated metadata devices (likely SSDs) for ZFS has been generically discussed a number of times on this list, but I don't think I've seen a final proposal that someone would take up for implementation (as a public source code, at

Re: [zfs-discuss] scripting incremental replication data streams

2012-09-19 Thread Karl Wagner
Hi Edward, My own personal view on this is that the simplest option is the best. In your script, create a new snapshot using one of 2 names. Let's call them SNAPSEND_A and SNAPSEND_B. You can decide which one by checking which currently exists. As manual setup, on the first run, create SNAPS
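
A minimal sketch of that alternating-snapshot scheme, assuming SNAPSEND_A and an initial full send already exist from the manual first run (dataset and host names are placeholders, not from the original message):

    SRC=tank/data                       # local source dataset (placeholder)
    DST=backup/data                     # target dataset on the remote box (placeholder)
    if zfs list -t snapshot "$SRC@SNAPSEND_A" >/dev/null 2>&1; then
        OLD=SNAPSEND_A; NEW=SNAPSEND_B
    else
        OLD=SNAPSEND_B; NEW=SNAPSEND_A
    fi
    zfs snapshot "$SRC@$NEW"
    zfs send -i "$SRC@$OLD" "$SRC@$NEW" | ssh backuphost zfs receive -F "$DST"
    zfs destroy "$SRC@$OLD"             # keep only the base for the next increment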

Re: [zfs-discuss] zfs send to older version

2012-10-23 Thread Karl Wagner
Actually, I think there is a world of difference. Backwards compatibility is something we all need. We need to be able to access content created in previous versions of software in newer versions. You cannot expect an older version to be compatible with the new features in a later version. T

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-25 Thread Karl Wagner
I can only speak anecdotally, but I believe it does. Watching zpool iostat, it does read all data on both disks in a mirrored pair. Logically, it would not make sense not to verify all redundant data. The point of a scrub is to ensure all data is correct. On 2012-10-25 10:25, Jim Klimov wrot
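
That behaviour is easy to observe for yourself; a quick sketch (pool name is a placeholder):

    zpool scrub tank
    zpool iostat -v tank 5    # per-vdev columns: during the scrub both disks of a mirror show read traffic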

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-26 Thread Karl Wagner
Does it not store a separate checksum for a parity block? If so, it should not even need to recalculate the parity: assuming checksums match for all data and parity blocks, the data is good. I could understand why it would not store a checksum for a parity block. It is not really necessary: Pa

Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-08 Thread Karl Wagner
On 2012-11-08 4:43, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: > When I said performance was abysmal, I meant, if you dig right down and pressure the system for throughput to disk, you've got a Linux or Windows VM inside of ESX, which is writing to a virtual disk, which ES

Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-09 Thread Karl Wagner
On 2012-11-08 17:49, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Karl Wagner I am just wondering why you export the ZFS system through NFS? I have ha

Re: [zfs-discuss] LUN expansion choices

2012-11-13 Thread Karl Wagner
On 2012-11-13 17:42, Peter Tribble wrote: > Given storage provisioned off a SAN (I know, but sometimes that's what you have to work with), what's the best way to expand a pool? Specifically, I can either grow existing LUNs, or add new LUNs. As an example, if I have 24x 2TB LUNs,
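
For the grow-the-existing-LUNs route, a hedged sketch of how the pool is told about the new size once the SAN has resized the LUN (pool and device names are placeholders):

    zpool set autoexpand=on tank                  # let vdevs grow automatically to the new LUN size
    zpool online -e tank c0t600A0B800029E5D2d0    # or expand a single device explicitly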

Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-14 Thread Karl Wagner
On 2012-11-14 12:55, dswa...@druber.com wrote: >> On 11/14/12 15:20, Dan Swartzendruber wrote: >> >>> Well, I think I give up for now. I spent quite a few hours over the last couple of days trying to get gnome desktop working on bare-metal OI, followed by virtualbox. Supposedly that works in

[zfs-discuss] Scrub performance

2013-02-04 Thread Karl Wagner
Hi all I have had a ZFS file server for a while now. I recently upgraded it, giving it 16GB RAM and an SSD for L2ARC. This allowed me to evaluate dedupe on certain datasets, which worked pretty well. The main reason for the upgrade was that something wasn't working quite right, and I was get
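
For anyone evaluating dedup the same way, a minimal sketch of turning it on per dataset and watching the result (the dataset name is illustrative):

    zfs set dedup=on tank/vmimages   # per-dataset property; only data written afterwards is deduplicated
    zpool list tank                  # the DEDUP column shows the pool-wide dedup ratio
    zdb -DD tank                     # DDT histogram and an estimate of the table's in-core size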

Re: [zfs-discuss] Scrub performance

2013-02-04 Thread Karl Wagner
OK then, I guess my next question would be what's the best way to "undedupe" the data I have? Would it work for me to zfs send/receive on the same pool (with dedup off), deleting the old datasets once they have been 'copied'? I think I remember reading somewhere that the DDT never shrinks, so
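
A hedged sketch of that send/receive rewrite, assuming the pool has room for a second copy while both exist (dataset names are placeholders):

    zfs set dedup=off tank/data                                  # new writes bypass the DDT
    zfs snapshot -r tank/data@undedupe
    zfs send -R tank/data@undedupe | zfs receive tank/data_new   # the copy is written un-deduped
    # after verifying the new copy:
    #   zfs destroy -r tank/data
    #   zfs rename tank/data_new tank/data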

[zfs-discuss] HELP! RPool problem

2013-02-16 Thread Karl Wagner
I have a small problem. I have a development fileserver box running Solaris 11 Express. The Rpool is mirrored between an SSD and a hard drive. Today, the SSD developed a fault for some reason. While trying to diagnose the problem, the system panicked and rebooted. The SSD was the first boot drive,

[zfs-discuss] ZFS root backup/"disaster" recovery, and moving root pool

2011-01-10 Thread Karl Wagner
Hi everyone I am currently testing Solaris 11 Express. I currently have a root pool on a mirrored pair of small disks, and a data pool consisting of 2 mirrored pairs of 1.5TB drives. I have enabled auto snapshots on my root pool, and plan to archive the daily snapshots onto my data pool. I
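
A minimal sketch of archiving those snapshots by receiving them into the data pool rather than keeping raw streams (pool and dataset names are illustrative):

    zfs create datapool/rpool_backup         # container for the received copies
    zfs snapshot -r rpool@daily-2011-01-10   # or reuse the auto-snapshot names
    zfs send -R rpool@daily-2011-01-10 | zfs receive -d -u datapool/rpool_backup   # -u: don't mount the copies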

[zfs-discuss] Request for comments: L2ARC, ZIL, RAM, and slow storage

2011-01-18 Thread Karl Wagner
Hi all This is just an off-the-cuff idea at the moment, but I would like to sound it out. Consider the situation where someone has a large amount of off-site data storage (of the order of 100s of TB or more). They have a slow network link to this storage. My idea is that this could be used to bu
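
As a purely illustrative shape of that idea: the slow remote storage backs the main vdev while local SSDs act as cache and log devices, so most reads come from L2ARC and synchronous writes land on the local slog first (device names are placeholders):

    zpool create remotepool c2t0d0       # LUN reached over the slow link, e.g. via iSCSI
    zpool add remotepool cache c3t0d0    # local SSD as L2ARC
    zpool add remotepool log c3t1d0      # local SSD as a separate ZIL (slog) device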

Re: [zfs-discuss] Request for comments: L2ARC, ZIL, RAM, and slow storage

2011-01-19 Thread Karl Wagner
> -Original Message- > From: Edward Ned Harvey > [mailto:opensolarisisdeadlongliveopensola...@nedharvey.com] > Sent: 19 January 2011 01:42 > To: 'Karl Wagner'; zfs-discuss@opensolaris.org > Subject: RE: [zfs-discuss] Request for comments: L2ARC, ZIL, RAM, an

Re: [zfs-discuss] ZFS snapshot query

2011-01-21 Thread Karl Wagner
Hi Looks like you missed the receive: zfs send emcpool1/... | ssh 10.63.25.218 zfs receive rpool/... Or else it was a typo in the message. Rgds Karl > -Original Message- > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-

Re: [zfs-discuss] ZFS and Virtual Disks

2011-02-15 Thread Karl Wagner
Hi I am no expert, but I have used several virtualisation environments, and I am always in favour of passing iSCSI straight through to the VM. It creates a much more portable system, often able to be booted on a different virtualisation environment, or even on a dedicated server, if you choose at

Re: [zfs-discuss] ZFS send/recv initial data load

2011-02-16 Thread Karl Wagner
From what I have read, this is not the best way to do it. Your best bet is to create a ZFS pool using the external device (or even better, devices) then zfs send | zfs receive. You can then do the same at your remote location. If you just send to a file, you may find it was a wasted trip (or pos
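
A minimal sketch of that approach, with placeholder names throughout:

    zpool create carrypool c4t0d0                             # pool on the external drive
    zfs snapshot -r tank/data@seed
    zfs send -R tank/data@seed | zfs receive -d -u carrypool  # lands as carrypool/data@seed
    zpool export carrypool                                    # detach the drive and ship it
    # at the remote site:
    zpool import carrypool
    zfs send -R carrypool/data@seed | zfs receive -d -u remotepool
    # later increments go over the wire against the shared @seed snapshot:
    zfs send -R -i @seed tank/data@next | ssh remotehost zfs receive -d -u remotepool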

[zfs-discuss] FW: Solaris panic

2011-03-17 Thread Karl Wagner
Hi all I have only just seen this, and thought someone may be able to help. On heavy IO activity, my Solaris 11 Express box hosting a ZFS data pool crashes. It seems to show page faults in several things, including nfsd, sched, zpool-tank and automountd. I get the following in the logs: Mar 17

[zfs-discuss] Solaris panic

2011-03-18 Thread Karl Wagner
Hi all I have only just seen this, and thought someone may be able to help. On heavy IO activity, my Solaris 11 Express box hosting a ZFS data pool crashes. It seems to show page faults in several things, including nfsd, sched, zpool-tank and automountd. I get the following in the logs: Mar 17

Re: [zfs-discuss] Solaris panic

2011-03-18 Thread Karl Wagner
Please ignore. This was sent from the wrong account, and another copy was sent from the correct one. Sorry > -Original Message- > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Karl Wagner > Sent: 17 March 2011 15:

[zfs-discuss] Read-only vdev

2011-04-08 Thread Karl Wagner
Hi everyone. I was just wondering if there was a way for a specific vdev in a pool to be read-only? I can think of several uses for this, but would need to know if it was possible before thinking them through properly. Cheers Mouse

Re: [zfs-discuss] Read-only vdev

2011-04-08 Thread Karl Wagner
> -Original Message- > From: Tomas Ögren [mailto:st...@acc.umu.se] > Sent: 08 April 2011 11:23 > To: Karl Wagner > Cc: zfs-discuss@opensolaris.org > Subject: Re: [zfs-discuss] Read-only vdev > > On 08 April, 2011 - Karl Wagner sent me these 3,5K b

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-05 Thread Karl Wagner
So there's an ARC entry referencing each individual DDT entry in the L2ARC?! I had made the assumption that DDT entries would be grouped into at least minimum-block-sized groups (8k?), which would have led to a much more reasonable ARC requirement. Seems like a bad design to me, which leads to
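
To put rough numbers on that concern (the per-entry sizes below are commonly quoted approximations, not figures from this thread): assuming roughly 320 bytes per DDT entry and roughly 180 bytes of ARC header for every block referenced in L2ARC, with one entry per unique 8 KB block:

    1 TB of unique data / 8 KB blocks   ~= 134 million DDT entries
    134 million entries x ~320 bytes    ~= 43 GB of DDT held in L2ARC
    134 million entries x ~180 bytes    ~= 24 GB of ARC spent just referencing them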

Re: [zfs-discuss] 350TB+ storage solution

2011-05-16 Thread Karl Wagner
I have to agree. ZFS needs a more intelligent scrub/resilver algorithm, which can 'sequentialise' the process. -- Sent from my Android phone with K-9 Mail. Please excuse my brevity. Giovanni Tirloni wrote: On Mon, May 16, 2011 at 9:02 AM, Sandon Van Ness wrote: Actually I have seen resilve