[zfs-discuss] Stretched fishes? (fibre channel 7000 series)

2010-06-05 Thread Nils Goroll
Hi, apologies for posting a fishworks-related question here, but I don't know of a better place (please tell me if one exists). Can anyone say anything about (planned) options to stretch out a storage 7000 series cluster over longer distances than what eSAS allows (preferably for more than

[zfs-discuss] I/O errors after zfs promote back and forth

2010-01-08 Thread Nils Goroll
Hi, I have just observed the following issue and I would like to ask if it is already known: I'm using zones on ZFS filesystems which were cloned from a common template (which is itself an original filesystem). A couple of weeks ago, I did a pkg image-update, so all zone roots got cloned

Re: [zfs-discuss] I/O errors after zfs promote back and forth

2010-01-08 Thread Nils Goroll
BTW, this was on snv_111b - sorry I forgot to mention. ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] ZFS dedup accounting

2009-11-03 Thread Nils Goroll
Hi Eric and all, Eric Schrock wrote: On Nov 3, 2009, at 6:01 AM, Jürgen Keil wrote: I think I'm observing the same (with changeset 10936) ... # mkfile 2g /var/tmp/tank.img # zpool create tank /var/tmp/tank.img # zfs set dedup=on tank # zfs create tank/foobar This has to do

Re: [zfs-discuss] ZFS dedup accounting reservations

2009-11-03 Thread Nils Goroll
Well, then you could have more logical space than physical space Reconsidering my own question again, it seems to me that the question of space management is probably more fundamental than I had initially thought, and I assume members of the core team will have thought through much of it. I

Re: [zfs-discuss] ZFS dedup accounting reservations

2009-11-03 Thread Nils Goroll
Hi David, simply can't stand up to reality. I kind of dislike the idea to talk about naiveness here. Maybe it was a poor choice of words; I mean something more along the lines of simplistic. The point is, space is no longer as simple a concept as it was 40 years ago. Even without

Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Nils Goroll
Hi Adam, thank you for your precise statement. Be it only from an engineering standpoint, this is the kind of argumentation which I was expecting (and hoping for). I'm not sure what would lead you to believe that there is fork between the open source / OpenSolaris ZFS and what we have in

Re: [zfs-discuss] STK 2540 and Ignore Cache Sync (ICS)

2009-10-14 Thread Nils Goroll
Hi Bob, Regarding my bonus question: I haven't yet found a definite answer on whether there is a way to read the currently active controller setting. I still assume that the nvsram settings which can be read with service -d arrayname -c read -q nvsram region=0xf2 host=0x00 do not necessarily

[zfs-discuss] STK 2540 and Ignore Cache Sync (ICS)

2009-10-13 Thread Nils Goroll
Hi, I am trying to find out some definite answers on what needs to be done on an STK 2540 to set the Ignore Cache Sync option. The best I could find is Bob's Sun StorageTek 2540 / ZFS Performance Summary (dated Feb 28, 2008, thank you, Bob), in which he quotes a posting of Joel Miller: To

Re: [zfs-discuss] STK 2540 and Ignore Cache Sync (ICS)

2009-10-13 Thread Nils Goroll
Hi Bob and all, I should update this paper since the performance is now radically different and the StorageTek 2540 CAM configurables have changed. That would be great, I think you'd do the community (and Sun, probably) a big favor. Is this information still current for F/W 07.35.44.10 ?

Re: [zfs-discuss] STK 2540 and Ignore Cache Sync (ICS)

2009-10-13 Thread Nils Goroll
Hi Bob and all, So this sounds like we need to wait for someone to come with a definite answer. I've received some helpful information on this: Byte 17 is for Ignore Force Unit Access. Byte 18 is for Ignore Disable Write Cache. Byte 21 is for Ignore Cache Sync. Change ALL settings to 1

Re: [zfs-discuss] lots of zil_clean threads

2009-09-23 Thread Nils Goroll
I should add that I have quite a lot of datasets: and maybe I should also add that I'm still running an old zpool version in order to keep the ability to boot snv_98: aggis:~$ zpool upgrade This system is currently running ZFS pool version 14. The following pools are out of date, and can

Re: [zfs-discuss] lots of zil_clean threads

2009-09-22 Thread Nils Goroll
Hi Neil and all, thank you very much for looking into this: So I don't know what's going on. What is the typical call stack for those zil_clean() threads? I'd say they are all blocking on their respective CVs: ff0009066c60 fbc2c0300 0 60 ff01d25e1180 PC:

[zfs-discuss] lots of zil_clean threads

2009-09-21 Thread Nils Goroll
Hi All, out of curiosity: Can anyone come up with a good idea about why my snv_111 laptop computer should run more than 1000 zil_clean threads? ff0009a9dc60 fbc2c0300 tq:zil_clean ff0009aa3c60 fbc2c0300 tq:zil_clean ff0009aa9c60

[zfs-discuss] zpool hanging after I/O error (usb) on all mirror components

2009-09-12 Thread Nils Goroll
Hi, yesterday, my backup zpool on two usb drives failed with USB errors (I don't know if connecting my iPhone plays a role) while scrubbing the pool. This led to all I/O on the zpool hanging, including df, zpool and zfs commands. init 6 would also hang due to bootadm hanging: process id

[zfs-discuss] zpool status showing wrong device name (similar to: ZFS confused about disk controller )

2009-08-02 Thread Nils Goroll
Hi All, over the last couple of weeks, I had to boot from my rpool from various physical machines because some component on my laptop mainboard blew up (you know that burned electronics smell?). I can't retrospectively document all I did, but I am sure I recreated the boot-archive, ran

Re: [zfs-discuss] NFS load balancing / was: ZFS, ESX , and NFS. oh my!

2009-07-08 Thread Nils Goroll
Hi Miles and All, this is off-topic, but as the discussion has started here: Finally, *ALL THIS IS COMPLETELY USELESS FOR NFS* because L4 hashing can only split up separate TCP flows. The reason why I have spent some time with

[zfs-discuss] Purpose of zfs_acl_split_ace

2009-06-24 Thread Nils Goroll
Hi, in nfs-discuss, Andrew Watkins has brought up the question of why an inheritable ACE is split into two ACEs when a descendant directory is created. Ref: http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/zfs_acl.c#1506 I must admit that I had observed this

Re: [zfs-discuss] Need Help Invalidating Uberblock

2008-12-16 Thread Nils Goroll
Well done, Nathan, thank you for taking on the additional effort to write it all up.

Re: [zfs-discuss] ZFS ACL/ACE issues with Samba - Access Denied

2008-11-27 Thread Nils Goroll
Hi Eric and all, Can anyone point me in the right direction here? Much appreciated! I have worked on a similar issue this week. Though I have not worked through all the information you have provided, could you please try the settings and source code changes I posted here:

Re: [zfs-discuss] ZFS ACL/ACE issues with Samba - Access Denied

2008-11-27 Thread Nils Goroll
If you run id username on the box, does it show the user's secondary groups? id never shows secondary groups. Use id -a. Nils
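To make the distinction above concrete, a quick sketch (output is illustrative; exact uids and group names depend on the system):

```shell
# On Solaris, plain `id` reports only the real uid and gid, so
# secondary (supplementary) groups stay hidden; `id -a` lists them.
# GNU coreutils `id` accepts -a as a compatibility no-op, since it
# prints groups by default.
id        # e.g. uid=100(nils) gid=10(staff)
id -a     # e.g. uid=100(nils) gid=10(staff) groups=10(staff),14(sysadmin)
```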

Re: [zfs-discuss] [osol-bugs] what's the story wtih bug #6592835?

2008-10-29 Thread Nils Goroll
Hi Graham, (this message was posted on opensolaris-bugs initially; I am CC'ing and reply-to'ing zfs-discuss as it seems to be a more appropriate place to discuss this.) I'm surprised to see that the status of bug 6592835 hasn't moved beyond "yes, that's a problem". My understanding is that the

Re: [zfs-discuss] Weird ZFS recv / NFS export problem

2008-10-01 Thread Nils Goroll
Jürgen, In a snoop I see that, when the access(2) fails, the nfsclient gets a Stale NFS file handle response, which gets translated to an ENOENT. What happens if you use the noac NFS mount option on the client? I wouldn't recommend using it in production environments unless you really need

Re: [zfs-discuss] Automatic removal of old snapshots

2008-09-25 Thread Nils Goroll
Before re-inventing the wheel, does anyone have any nice shell script to do this kind of thing (to be executed from cron)? http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_10 http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_11
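For anyone who does want to roll their own before trying Tim's service, the retention logic can be kept separate from the zfs calls so it is easy to test. This is only a sketch; the dataset name in the usage comment is a made-up example, and it relies on `zfs list -s creation` emitting snapshots oldest-first:

```shell
#!/bin/sh
# Given a retention count and a file of snapshot names sorted
# oldest-first (as produced by `zfs list -s creation`), print the
# snapshots that fall outside the retention window.
expired_snapshots() {
    keep=$1
    list=$2
    total=$(wc -l < "$list")
    excess=$((total - keep))
    if [ "$excess" -gt 0 ]; then
        head -n "$excess" "$list"
    fi
}

# Hypothetical cron usage (dataset "tank/home" is an example):
#   zfs list -H -t snapshot -o name -s creation -r tank/home > /tmp/snaps
#   expired_snapshots 7 /tmp/snaps | while read -r s; do zfs destroy "$s"; done
```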

Re: [zfs-discuss] zfs-auto-snapshot default schedules

2008-09-25 Thread Nils Goroll
Tim,
- Frequent snapshots, taken every 15 minutes, keeping the 4 most recent
- Hourly snapshots, taken once every hour, keeping 24
- Daily snapshots, taken once every 24 hours, keeping 7
- Weekly snapshots, taken once every 7 days, keeping 4
- Monthly snapshots, taken on the first day of

Re: [zfs-discuss] Automatic removal of old snapshots

2008-09-25 Thread Nils Goroll
Wade, that order. Also I guess user case in my mind would leave a desktop user more likely to need access to a few minutes, hours or days ago then 12 months ago. You are guessing that, but I am a desktop user who'd rather like the contrary. I think Tim has already stated that he would not

Re: [zfs-discuss] Disk Concatenation

2008-09-23 Thread Nils Goroll
Hi Darren, http://www.opensolaris.org/jive/thread.jspa?messageID=271983#271983 The case mentioned there is one where concatenation in vdevs would be useful. That case appears to be about trying to get a raidz sized properly against disks of different sizes. I don't see a similar issue

Re: [zfs-discuss] Disk Concatenation

2008-09-22 Thread Nils Goroll
See http://www.opensolaris.org/jive/thread.jspa?messageID=271983#271983 The case mentioned there is one where concatenation in vdevs would be useful.

Re: [zfs-discuss] Procedure to follow after zpool upgrade on rpool

2008-09-19 Thread Nils Goroll
Hi Pablo, Why is this step (the touch one) needed? # make bootadm re-create archive bootadm update-archive /boot/solaris/bin/update_grub This is just an easy way to make sure bootadm will write new archive files. You could also use rm /platform/i86pc/amd64/boot_archive \

Re: [zfs-discuss] RAIDZ read-optimized write?

2008-09-19 Thread Nils Goroll
Hi Richard, Someone in the community was supposedly working on this, at one time. It gets brought up about every 4-5 months or so. Lots of detail in the archives. Thank you for the pointer and sorry for the noise. I will definitely browse the archives to find out more regarding this

Re: [zfs-discuss] [storage-discuss] A few questions : RAID set width

2008-09-18 Thread Nils Goroll
Hi all, Ben Rockwood wrote: You want to keep stripes wide to reduce wasted disk space but you also want to keep them narrow to reduce the elements involved in parity calculation. I second Ben's argument, and the main point IMHO is how the RAID behaves in the degraded state. When a disk fails,

Re: [zfs-discuss] [storage-discuss] A few questions - small read I/O performance on RAIDZ

2008-09-18 Thread Nils Goroll
Hi Peter, Sorry, I read your post after posting a reply myself. Peter Tribble wrote: No. The number of spindles is constant. The snag is that for random reads, the performance of a raidz1/2 vdev is essentially that of a single disk. (The writes are fast because they're always full-stripe;
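Peter's point lends itself to a back-of-the-envelope calculation (a sketch; the ~100 random-read IOPS per spindle is an assumed typical figure for 7200 rpm disks):

```shell
# 8 disks at ~100 random-read IOPS each. A raidz1/2 vdev must read a
# full stripe per FS block to verify the checksum, so the whole vdev
# behaves like roughly one disk for small random reads; mirrors can
# serve reads from every spindle independently.
DISKS=8
IOPS_PER_DISK=100
echo "4 x 2-way mirrors: ~$(( DISKS * IOPS_PER_DISK )) IOPS"
echo "1 x 8-disk raidz1:  ~$(( IOPS_PER_DISK )) IOPS"
```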

[zfs-discuss] typo: [storage-discuss] A few questions : RAID set width

2008-09-18 Thread Nils Goroll
I Ben's argument, and the main point IMHO is how the RAID behaves in the ^ second

[zfs-discuss] RAIDZ read-optimized write?

2008-09-18 Thread Nils Goroll
Hi Robert, Basically, the way RAID-Z works is that it spreads an FS block across all disks in a given vdev (minus parity/checksum disks). Because when you read data back from zfs, before it gets to the application zfs will check its checksum (the fs checksum, not a raid-z one), so it needs the entire fs

[zfs-discuss] Procedure to follow after zpool upgrade on rpool (was: zpool upgrade wrecked GRUB)

2008-09-18 Thread Nils Goroll
(not sure if this has already been answered) I have a similar situation and would love some concise suggestions: Had a working version of 2008.05 running snv_93 with the updated grub. I did a pkg-update to snv_95 and ran the zfs update when it was suggested. System ran fine until I did a

Re: [zfs-discuss] Procedure to follow after zpool upgrade on rpool

2008-09-18 Thread Nils Goroll
Not knowing of a better place to put this, I have created http://www.genunix.org/wiki/index.php/ZFS_rpool_Upgrade_and_GRUB Please make any corrections there. Thanks, Nils

Re: [zfs-discuss] Tool to figure out optimum ZFS recordsize for a Mail server Maildir tree?

2008-09-18 Thread Nils Goroll
Hi, It is important to remember that ZFS is ideal for writing new files from scratch. IIRC, maildir MTAs never overwrite mail files. But courier-imap does maintain some additional index files which will be overwritten and I guess other IMAP servers will probably do the same. Nils

Re: [zfs-discuss] Problem w/ b95 + ZFS (version 11) - seeing fair number of errors on multiple machines

2008-08-26 Thread Nils Goroll
Hi David, have you tried unmounting and re-mounting all filesystems which are not being mounted automatically? See other posts to zfs-discuss. Nils

Re: [zfs-discuss] Problem w/ b95 + ZFS (version 11) - seeing fair number of errors on multiple machines

2008-08-26 Thread Nils Goroll
glitch: have you tried mounting and re-mounting all filesystems which are not ^^^ unmounting

Re: [zfs-discuss] Can ZFS delete snapshots automatically?

2008-08-24 Thread Nils Goroll
zfs itself can't, but Tim Foster has written a nice script, integrated into SMF, which can be used to automatically create and delete snapshots at various intervals. see http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_10 for the latest release and

Re: [zfs-discuss] Possible to do a stripe vdev?

2008-08-22 Thread Nils Goroll
Hi, John wrote: I'm setting up a ZFS fileserver using a bunch of spare drives. I'd like some redundancy and to maximize disk usage, so my plan was to use raid-z. The problem is that the drives are considerably mismatched and I haven't found documentation (though I don't see why it

[zfs-discuss] zpool detach from degraded mirror : why only applicable to mirror ... ?

2008-08-15 Thread Nils Goroll
Hi, I thought that this question must have been answered already, but I have not found any explanations. I'm sorry in advance if this is redundant, but: Why exactly doesn't ZFS let me detach a device from a degraded mirror? haggis:~# zpool status pool: rmirror state: DEGRADED status: One or

Re: [zfs-discuss] zpool detach from degraded mirror : why only applicable to mirror ..

2008-08-15 Thread Nils Goroll
Matthias, that does not answer my question. The question is: Why can't I decide that I consciously want to destroy the (two way) mirror (and, yes, do away with any redundancy). Nils

[zfs-discuss] Oops: zpool detach from degraded mirror only applicable to mirror ..

2008-08-15 Thread Nils Goroll
Hi all, especially Matthias, I am very sorry for having bothered you with this stupid question, I am embarrassed by the fact that I did not realize it's not a mirror. The fact that I named it rmirror definitely added confusion on my side. Apologies in particular for not having taken Matthias'

[zfs-discuss] cron and roles (zfs-auto-snapshot 0.11 work)

2008-08-03 Thread Nils Goroll
My previous reply via email did not get linked to this post, so let me resend it: "can roles run cron jobs?" No. You need a user who can take on the role. Darn, back to the drawing board. I don't have all the context on this but Solaris RBAC roles *can* run cron jobs. Roles don't have to

[zfs-discuss] zfs-auto-snapshot: Use at ? SMF prop caching?

2008-08-03 Thread Nils Goroll
Hi Tim, So, I've got a pretty basic solution: Every time the service starts, we check for the existence of a snapshot [...] - if one doesn't exist, then we take a snapshot under the policy set down by that instance. This does sound like a valid alternative solution for this requirement if

Re: [zfs-discuss] zfs-auto-snapshot 0.11 work (was Re: zfs-auto-snapshot with at schedul

2008-07-31 Thread Nils Goroll
Hi Tim, Finally getting around to answering Nils' mail properly - only a month late! Not a problem. Okay, after careful consideration, I don't think I'm going to add this that's fine for me, but ... but in cases where you're powering down a laptop overnight, you don't want to just take a

[zfs-discuss] zfs-auto-snapshot at jobs: fix for README example

2008-06-29 Thread Nils Goroll
An example from the readme does not work and fails with: Error: Cant schedule at job: at midnight sun Change: --- README.zfs-auto-snapshot.txt.o Sun Jun 29 11:23:35 2008 +++ README.zfs-auto-snapshot.txtSun Jun 29 11:24:31 2008 @@ -171,7 +171,7 @@ 'setprop zfs/at_timespec =

Re: [zfs-discuss] Oops: zfs-auto-snapshot with at scheduling

2008-06-26 Thread Nils Goroll
Hi all, I'll attach a new version of zfs-auto-snapshot including some more improvements, and probably some new bugs. Seriously, I have tested it, but certainly not all functionality, so please let me know about any (new) problems you come across. Excerpt from the change log: - Added support to

[zfs-discuss] How about an zfs-auto-snapshot project

2008-06-26 Thread Nils Goroll
And how about making this an official project?

Re: [zfs-discuss] zfs-auto-snapshot with at scheduling

2008-06-24 Thread Nils Goroll
and the tar file ... zfs-auto-snapshot-0.10_atjobs.tar.bz2 (BZip2 compressed data)

Re: [zfs-discuss] zfs send/receive issue

2008-06-11 Thread Nils Goroll
see: http://bugs.opensolaris.org/view_bug.do?bug_id=6700597