Re: [zfs-discuss] ZFS - how to determine which physical drive to replace
On Sat, Dec 12, 2009 at 8:17 AM, Paul Bruce p...@cais.com.au wrote:
> Hi, I'm just about to build a ZFS system as a home file server in
> raidz, but I have one question - pre-empting the need to replace one
> of the drives if it ever fails. How on earth do you determine the
> actual physical drive that has failed? I've got the whole zpool status
> thing worked out, but how do I translate the c1t0d0, c1t0d1, etc. to a
> real physical drive. I can just see myself looking at the 6 drives and
> thinking... c1t0d1, I think that's *this* one... eenie meenie minie moe.

As suggested at http://opensolaris.org/jive/thread.jspa?messageID=416264,
you can try viewing the disk serial numbers with cfgadm:

  cfgadm -al -s "select=type(disk),cols=ap_id:info"

You may need to power down the system to view the serial numbers printed
on the disks to match them up, but it beats guessing.

Ed Plese

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
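As a sketch of the serial-number approach on Solaris (iostat -En is a
standard Solaris option that reports per-device vendor/product/serial
info; the exact output fields can vary by release):

```shell
# List disk attachment points; the info column often includes the
# drive's model and serial number:
cfgadm -al -s "select=type(disk),cols=ap_id:info"

# iostat -En reports device details including the serial number,
# which can be matched against the label printed on the drive:
iostat -En | egrep 'c[0-9]|Serial No'
```

Once the serial numbers are known, a label on each drive bay mapping
cXtYdZ to serial number saves a lot of guesswork at replacement time.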
Re: [zfs-discuss] Pool resize
On Mon, Dec 7, 2009 at 3:41 PM, Alexandru Pirvulescu sigx...@gmail.com wrote:
> I've read before regarding zpool size increase by replacing the vdevs.
>
> The initial pool was a raidz2 with 4 640GB disks. I've replaced each
> disk with a 1TB disk by taking it out, inserting the new disk, running
> cfgadm -c configure on the port, and then zpool replace bigpool c6tXd0.
>
> The problem is that the zpool size is the same (2.33TB raw), as seen
> below:
>
> # zpool list bigpool
> NAME      SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
> bigpool  2.33T  1.41T  942G  60%  1.00x  ONLINE  -
>
> It should be ~3.8-3.9TB, right?

An autoexpand property was added a few months ago for zpools. This needs
to be turned on to enable automatic vdev expansion. For example:

  # zpool set autoexpand=on bigpool

Ed Plese
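A sketch of the full expansion sequence, using the pool name from the
post above (the c6tXd0 device names are assumed to match the original
setup):

```shell
# Enable automatic expansion for future disk replacements:
zpool set autoexpand=on bigpool

# For disks that were already replaced before autoexpand was set,
# "zpool online -e" expands each device to use its full capacity:
for disk in c6t0d0 c6t1d0 c6t2d0 c6t3d0; do
  zpool online -e bigpool "$disk"
done

# The pool size should now reflect the larger disks:
zpool list bigpool
```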
Re: [zfs-discuss] Accidentally added disk instead of attaching
On Mon, Dec 7, 2009 at 12:42 PM, Cindy Swearingen cindy.swearin...@sun.com wrote:
> I agree that zpool attach and add look similar in their syntax, but if
> you attempt to add a disk to a redundant config, you'll see an error
> message similar to the following:
>
> # zpool status export
>   pool: export
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         export      ONLINE       0     0     0
>           mirror-0  ONLINE       0     0     0
>             c1t3d0  ONLINE       0     0     0
>             c1t4d0  ONLINE       0     0     0
>
> errors: No known data errors
>
> # zpool add export c1t6d0
> invalid vdev specification
> use '-f' to override the following errors:
> mismatched replication level: pool uses mirror and new vdev is disk
>
> Doesn't the mismatched replication message help?

When adding a disk to a single-disk pool, this message isn't given and
the add proceeds without any warning and without the need to force it:

  # cd /tmp
  # mkfile 256m f1 f2
  # zpool create testpool /tmp/f1
  # zpool add testpool /tmp/f2
  # zpool status testpool
    pool: testpool
   state: ONLINE
   scrub: none requested
  config:

          NAME       STATE     READ WRITE CKSUM
          testpool   ONLINE       0     0     0
            /tmp/f1  ONLINE       0     0     0
            /tmp/f2  ONLINE       0     0     0

  errors: No known data errors

Would it be beneficial to have a command line option to zpool that would
do only a preview or dry run of the changes: display what the pool would
look like after the operation, but leave the pool unchanged? For those
who make pool changes only rarely, getting in the habit of always using
an option like this might be a good way to ensure the change is really
what is desired. Some information that might be nice to see would be the
before and after versions of zpool list and zpool status, along with the
command that could be run to reverse the change, or a warning if the
change is irreversible, as is the case with zpool add.

Ed Plese
Re: [zfs-discuss] Accidentally added disk instead of attaching
On Mon, Dec 7, 2009 at 4:32 PM, Ed Plese e...@edplese.com wrote:
> Would it be beneficial to have a command line option to zpool that
> would do only a preview or dry run of the changes: display what the
> pool would look like after the operation, but leave the pool
> unchanged? For those who make pool changes only rarely, getting in the
> habit of always using an option like this might be a good way to
> ensure the change is really what is desired.

There I go requesting features that are already there. zpool add already
has a -n option that will show the pool configuration that would result,
without actually making any changes. A couple of other subcommands
accept the -n option as well.

Ed Plese
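A sketch of the dry-run workflow, reusing the file-backed testpool from
the earlier message (the /tmp/f* vdev names are from that example):

```shell
# -n displays the configuration that would result from the add,
# without modifying the pool:
zpool add -n testpool /tmp/f2

# If the printed layout is what was intended, rerun without -n:
# zpool add testpool /tmp/f2

# "zpool create" also accepts -n, to preview a new pool's layout
# before creating it:
zpool create -n testpool2 mirror /tmp/f3 /tmp/f4
```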
Re: [zfs-discuss] Comstar thin provisioning space reclamation
You can reclaim this space with the SDelete utility from Microsoft. With
the -c option it will zero any free space on the volume. For example:

  C:\>sdelete -c C:

I've only tested this with xVM and with compression enabled for the
zvol, but it worked very well.

Ed Plese

On Tue, Nov 17, 2009 at 12:13 PM, Brent Jones br...@servuhome.net wrote:
> I use several file-backed thin provisioned iSCSI volumes presented
> over Comstar. The initiators are Windows 2003/2008 systems with the MS
> MPIO initiator. The Windows systems only claim to be using about 4TB
> of space, but the ZFS volume says 7.12TB is used. Granted, I imagine
> ZFS allocates the blocks as soon as Windows needs space, and Windows
> will eventually not need that space again. Is there a way to reclaim
> unused space on a thin provisioned iSCSI target?
>
> --
> Brent Jones
> br...@servuhome.net
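For the zero-fill trick to reclaim space, the backing store needs
compression enabled so that zeroed blocks collapse. A sketch of setting
up such a volume (the pool name "tank" and volume name "iscsivol" are
made up for illustration):

```shell
# -s makes the zvol sparse (thin provisioned); with compression on,
# the runs of zeros written by "sdelete -c" consume almost no space:
zfs create -s -V 500G -o compression=on tank/iscsivol

# Check how much space the volume actually consumes, and how well
# it is compressing:
zfs get used,referenced,compressratio tank/iscsivol
```

After running SDelete on the Windows side, the "used" value on the ZFS
side should drop back toward what the initiator actually stores.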
Re: [zfs-discuss] Manual drive failure?
Tim,

I think you're looking for zpool offline:

  zpool offline [-t] pool device ...

      Takes the specified physical device offline. While the device is
      offline, no attempt is made to read or write to the device. This
      command is not applicable to spares or cache devices.

      -t    Temporary. Upon reboot, the specified physical device
            reverts to its previous state.

Ed Plese

On Wed, Nov 11, 2009 at 12:15 PM, Tim Cook t...@cook.ms wrote:
> So, I've done a bit of research and RTFM, and haven't found an answer.
> If I've missed something obvious, please point me in the right
> direction. Is there a way to manually fail a drive via ZFS? (This is a
> raid-z2 raidset.) In my case, I'm pre-emptively replacing old drives
> with newer, faster, larger drives. So far, I've only been able to come
> up with two solutions to the issue, neither of which is very graceful.
>
> The first option is to simply yank the old drive out of the chassis. I
> could go on at length about why I dislike doing that, but I think it's
> safe to say everyone agrees this isn't a good option.
>
> The second option is to export the zpool, then cfgadm -c disconnect
> the drive, and finally gracefully pull it from the system.
> Unfortunately, this means my data has to go offline. While that's not
> a big deal for a home box, it is for something in the enterprise with
> uptime concerns.
>
> From my experimentation, you can't disconnect or unconfigure a drive
> that is part of a live zpool. So, is there a way to tell ZFS to
> pre-emptively fail it so that you can use cfgadm to put the drive into
> a state for a graceful hotswap? Am I just missing something obvious?
> Detach seems to only apply to mirrors and hot spares.
>
> --Tim
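A sketch of the graceful swap using zpool offline, with the pool kept
online throughout (the pool name "tank", device c2t5d0, and attachment
point sata0/5 are made up for illustration):

```shell
# Take the old drive offline so ZFS stops issuing I/O to it:
zpool offline tank c2t5d0

# With no I/O outstanding, the drive can be unconfigured and pulled:
cfgadm -c unconfigure sata0/5

# After inserting the new drive, configure it and start the replace:
cfgadm -c configure sata0/5
zpool replace tank c2t5d0

# Watch the resilver progress:
zpool status tank
```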
Re: [zfs-discuss] ZFS scalability in terms of file system count (or lack thereof) in S10U6
On Sun, Oct 19, 2008 at 4:08 PM, Paul B. Henson [EMAIL PROTECTED] wrote:
> At about 5000 filesystems, it starts taking over 30 seconds to
> create/delete additional filesystems.

The biggest problem I ran into was the boot time, specifically when zfs
volinit is executing. With ~3500 filesystems on S10U3, the boot time for
our X4500 was around 40 minutes. Any idea what your boot time is like
with that many filesystems on the newer releases?

Ed Plese
Re: [zfs-discuss] ZFS with Samba and the previous versions-tab under Windows explorer
On Fri, Aug 1, 2008 at 3:12 PM, Rene [EMAIL PROTECTED] wrote:
> [sambatest]
>         comment = samba_with_shadowcopies testing area
>         path = /export/sambatest
>         read only = no
>         vfs objects = shadow_copy
>         shadow_copy: path = /export/sambatest/renny/.zfs/snapshot
>         shadow_copy: format = $Y.$m.$d-$H.$M.$S
>         shadow_copy: sort = desc

Try changing the path to /export/sambatest/renny so that it matches the
shadow_copy: path except for the /.zfs/snapshot part. The module doesn't
work when the ZFS filesystem is a subdirectory of the Samba share.

In addition, make sure that there are actually changes between the
snapshots. If there aren't any, the Previous Versions tab may not
appear.

Ed Plese
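With that change applied, the share section would look something like
the following (share name, paths, and option values are taken from the
original post; only the path line differs):

```ini
[sambatest]
        comment = samba_with_shadowcopies testing area
        path = /export/sambatest/renny
        read only = no
        vfs objects = shadow_copy
        shadow_copy: path = /export/sambatest/renny/.zfs/snapshot
        shadow_copy: format = $Y.$m.$d-$H.$M.$S
        shadow_copy: sort = desc
```

The key point is that path now names the root of the ZFS filesystem
whose .zfs/snapshot directory the module reads.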
Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS
On Thu, Sep 20, 2007 at 12:49:29PM -0700, Paul B. Henson wrote:
> > > I was planning to provide CIFS services via Samba. I noticed a
> > > posting a while back from a Sun engineer working on integrating
> > > NFSv4/ZFS ACL support into Samba, but I'm not sure if that was
> > > ever completed and shipped, either in the Sun version or pending
> > > inclusion in the official version. Does anyone happen to have an
> > > update on that? Also, I saw a patch proposing a different
> > > implementation of shadow copies that better supported ZFS
> > > snapshots; any thoughts on that would also be appreciated.
> >
> > This work is done and, AFAIK, has been integrated into S10 8/07.
>
> Excellent. I did a little further research myself on the Samba mailing
> lists, and it looks like ZFS ACL support was merged into the official
> 3.0.26 release. Unfortunately, the patch to improve shadow copy
> performance on top of ZFS still appears to be floating around the
> technical mailing list under discussion.

ZFS ACL support was going to be merged into 3.0.26, but 3.0.26 ended up
being a security fix release and the merge got pushed back. The next
release will be 3.2.0, and ACL support will be in there. As others have
pointed out, though, Samba is included in Solaris 10 Update 4 along with
support for ZFS ACLs, Active Directory, and SMF.

The patches for the shadow copy module can be found here:

  http://www.edplese.com/samba-with-zfs.html

There are hopefully only a few minor changes that I need to make to them
before submitting them again to the Samba team. I recently compiled the
module for someone to use with Samba as shipped with U4, and he reported
that it worked well. I've made the compiled module available on this
page as well if anyone is interested in testing it.

The patch doesn't improve performance anymore, in order to preserve
backwards compatibility with the existing module, but it adds usability
enhancements for both admins and end users.
It allows shadow copy functionality to just work with ZFS snapshots,
without having to create symlinks to each snapshot in the root of each
share. For end users, it allows the Previous Versions list to be sorted
chronologically to make it easier to use. If performance is an issue,
the patch can be modified to improve performance like the original patch
did, but this only affects directory listings and is likely negligible
in most cases.

> > > Is there any facility for managing ZFS remotely? We have a central
> > > identity management system that automatically provisions resources
> > > as necessary for [...]
> >
> > This is a loaded question. There is a webconsole interface to ZFS
> > which can be run from most browsers. But I think you'll find that
> > the CLI is easier for remote management.
>
> Perhaps I should have been more clear -- a remote facility available
> via programmatic access, not manual user direct access. If I wanted to
> do something myself, I would absolutely log in to the system and use
> the CLI. However, the question was regarding an automated process. For
> example, our Perl-based identity management system might create a user
> in the middle of the night based on the appearance in our
> authoritative database of that user's identity, and need to create a
> ZFS filesystem and quota for that user. So, I need to be able to
> manipulate ZFS remotely via a programmatic API.

While it won't help you in your case, since your users access the files
using protocols other than CIFS, if you use only CIFS it's possible to
configure Samba to automatically create a user's home directory the
first time the user connects to the server. This is done using the
root preexec share option in smb.conf, and an example is provided at
the above URL.

Ed Plese
Re: [zfs-discuss] Zones on large ZFS filesystems
On Thu, Mar 29, 2007 at 01:18:31PM +0300, Niclas Sodergard wrote:
> Sorry for crossposting, but it seems I have stumbled upon a problem
> that affects both. I have a V490 running Solaris 10u3 with a 16x750GB
> raid array connected to it. I've created an 8TB zfs filesystem called
> data1 and a zfs filesystem called data1/zones mounted to /zones. The
> structure looks like this:
>
> data1                208K  8.03T  24.5K  /data1
> data1/zones          103K  8.03T  29.5K  /zones
> data1/zones/mytest  24.5K  8.03T  24.5K  /zones/mytest
>
> When I execute zoneadm -z mytest install I get the following error:
>
> # zoneadm -z mytest install
> zoneadm: /zones/mytest: Value too large for defined data type
> could not verify zonepath /zones/mytest because of the above errors.
> zoneadm: zone mytest failed to verify
>
> Is this due to a too large filesystem or something like this? Sure,
> 8TB is quite big, but not that large by today's standards (and yes, we
> really want an 8TB filesystem here; it is not a production system). If
> I look at the truss output, it looks like a call to statvfs() returns
> EOVERFLOW, which probably isn't a good thing. Is there a solution here
> other than to move the zone root to a smaller disk?

Set a quota (10G should work just fine) on the filesystem and then
perform the zone install. Afterwards, remove the quota.

Ed Plese
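A sketch of the workaround, using the filesystem and zone names from
the original post:

```shell
# Cap the filesystem so statvfs() no longer overflows during the
# zone verification:
zfs set quota=10g data1/zones/mytest

# Perform the zone install while the quota is in place:
zoneadm -z mytest install

# Remove the quota once the install has completed:
zfs set quota=none data1/zones/mytest
```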
Re: [zfs-discuss] Samba ACLs en ZFS
On Wed, Feb 21, 2007 at 11:21:27AM +0100, Rodrigo Ler?a wrote:
> If I don't use any special ACL with Samba and ZFS, only each user can
> write and read from his home directory. Am I affected by the
> incompatibility?

Samba runs as the requesting user during file access. Because of that,
any file permissions or ACLs are respected even if Samba doesn't have
support for the ACLs. The main thing that Samba support for ZFS ACLs
will bring is the ability to view and set the ACLs from a Windows
client, in particular through the normal Windows ACL GUI.

Ed Plese
[zfs-discuss] ZFS with Samba Shadow Copy
For those who work with Samba, I've set up a page about the ZFS Shadow
Copy VFS module for Samba that I've been working on. This module helps
make ZFS snapshots easily navigable through a Microsoft-provided GUI in
Windows, which can be used to give end users an easy way to restore
files from their snapshots.

See http://www.edplese.com/samba-with-zfs.html (at the bottom of the
page) for more info.

Ed Plese
Re: [zfs-discuss] Re: ZFS with Samba Shadow Copy
On Fri, Dec 15, 2006 at 09:33:51AM -0800, Jeb Campbell wrote:
> One thing, on the home dir create, would you want to chown it to the
> user?

Yes. I guess I made my minimal example a bit too minimal. I updated it
to chown to the user and also added a 'zfs set quota', which may be
fairly common as well.

> Now if we can just get Samba+ZFS ACLs, we would really be rocking.

From this thread it sounds like they have already been added to Samba,
or at least they are very close to working:

http://archives.free.net.ph/thread/20061207.100738.a8abc689.en.html#20061207.100738.a8abc689

Ed Plese
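The updated script described above might look something like this
sketch: the tank/home layout matches the minimal example from the
earlier thread, but the 10g quota value is an assumption for
illustration.

```shell
#!/usr/bin/bash
# Create a ZFS home filesystem for a user on first connect, then set a
# quota and hand ownership of the directory to the user.

USER="$1"

if [ ! -e "/tank/home/$USER" ]; then
    zfs create "tank/home/$USER"
    zfs set quota=10g "tank/home/$USER"
    chown "$USER" "/tank/home/$USER"
fi
```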
Re: [zfs-discuss] Re: Mirrored Raidz
On Tue, Oct 24, 2006 at 03:31:41PM -0400, Dale Ghent wrote:
> Okay, then if the person can stand to lose even more space, do zfs
> mirroring on each JBOD. Then we'd have a mirror of mirrors instead of
> a mirror of raidz's. Remember, the OP wanted chassis-level redundancy
> as well as redundancy within the domain of each chassis. You can't do
> that now with ZFS unless you combine ZFS with SVM.

If you have the disk space to let you do a mirror of mirrors, you could
create the pool from a series of 4-way mirrors, with each mirror
containing 2 disks from each JBOD enclosure. This would give you the
ability to sustain the simultaneous failure of an entire enclosure and
at least one (though sometimes multiple) disk failures in the working
enclosure. This would also be a pure ZFS solution.

Ed Plese
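A sketch of such a pool with two enclosures of two disks each per
mirror. The device names are made up: here c2tXd0 is enclosure A and
c3tXd0 is enclosure B, with each 4-way mirror taking two disks from
each.

```shell
# Each mirror vdev spans both enclosures, two disks per enclosure:
zpool create tank \
  mirror c2t0d0 c2t1d0 c3t0d0 c3t1d0 \
  mirror c2t2d0 c2t3d0 c3t2d0 c3t3d0

# If either enclosure fails entirely, every mirror still has two
# working disks, so one of those can fail as well:
zpool status tank
```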
Re: [zfs-discuss] A versioning FS
On Fri, Oct 06, 2006 at 09:40:22AM +0200, [EMAIL PROTECTED] wrote:
> Example (real life scenario): there is a Samba server for about 200
> concurrently connected users. They keep mainly doc/xls files on the
> server. From time to time they (somehow) corrupt their files (they
> share the files, so it is possible) and the files are recovered from
> backup. With versioning, they could be told that if their main file is
> corrupted they can open a previous version and keep working. ZFS
> snapshots are not a solution in this case because we would have to
> create snapshots for 400 filesystems (yes, each user has his own
> filesystem, and I said that there are 200 concurrent connections, but
> there are many more accounts on the server) each hour or so.

Why is creating that many snapshots a problem? The somewhat recent
addition of recursive snapshots (zfs snapshot -r) reduces this to a
single command. Taking individual snapshots of each filesystem can take
a decent amount of time, but I was under the impression that recursive
snapshots would be much faster due to the snapshots being committed in
a single transaction. Is this not correct?

Ed Plese
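A sketch of hourly recursive snapshots for a tree of per-user
filesystems (the tank/home dataset name is an assumption for
illustration):

```shell
# One command snapshots tank/home and every filesystem beneath it,
# committed atomically in a single transaction:
zfs snapshot -r "tank/home@hourly-$(date +%Y%m%d%H)"

# Scheduled from cron at the top of every hour, for example:
# 0 * * * * /usr/sbin/zfs snapshot -r tank/home@hourly-$(date +\%Y\%m\%d\%H)

# List the resulting snapshots:
zfs list -t snapshot | grep hourly
```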
Re: [zfs-discuss] Re: mkdir == zfs create
On Thu, Sep 28, 2006 at 09:00:47AM -0700, Ron Halstead wrote:
> Please elaborate: CIFS just requires the automount hack.

Samba's smb.conf supports a root preexec parameter that allows a
program to be run when a share is connected to. For example, with a
simple script, createhome.sh, like:

  #!/usr/bin/bash

  if [ ! -e /tank/home/$1 ]; then
      zfs create tank/home/$1
  fi

and a [homes] share in smb.conf like:

  [homes]
          comment = User Home Directories
          browseable = no
          writable = yes
          root preexec = createhome.sh '%U'

Samba will automatically create a ZFS filesystem for each user's home
directory the first time the user connects to the server. You'd likely
want to expand on this to have it properly set the permissions, perform
some additional checks and logging, etc.

This can be elaborated on to do neat things like create a ZFS clone
when a client connects and then destroy the clone when the client
disconnects (via root postexec). This could possibly be useful for the
shared build system that was mentioned in an earlier post.

To truly replace every mkdir call, you could write a fairly simple VFS
module for Samba that would replace every mkdir call with a call to zfs
create. This method is a bit more involved than the above method since
VFS modules are coded in C, but it's definitely a possibility.

Ed Plese
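The clone-on-connect idea mentioned above could be sketched along these
lines. The dataset names and the snapshot name "base" are assumptions
for illustration; a companion script hooked to root postexec would
destroy the clone on disconnect.

```shell
#!/usr/bin/bash
# Give each connecting user a private, writable ZFS clone of a shared
# build area. Run from smb.conf via: root preexec = clonebuild.sh '%U'

USER="$1"
CLONE="tank/builds/clone-$USER"

if [ ! -e "/tank/builds/clone-$USER" ]; then
    # Clone a known-good snapshot of the shared build area; the clone
    # is writable but initially consumes no extra space:
    zfs clone tank/builds@base "$CLONE"
fi
```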
Re: [zfs-discuss] Apple Time Machine
On Mon, Aug 07, 2006 at 12:08:17PM -0700, Eric Schrock wrote:
> Yeah, I just noticed this line:
>
>   Backup Time: Time Machine will back up every night at midnight,
>   unless you select a different time from this menu.
>
> So this is just standard backups, with a (very) slick GUI layered on
> top. From the impression of the text-only rumor feed, it sounded more
> impressive from a filesystem implementation perspective. Still, the
> GUI integration is pretty nice, and implies that their backups are in
> some easily accessed form. Otherwise, extracting hundreds of files
> from a compressed stream would induce too much delay for the
> interactive stuff they describe.

You can achieve similar functionality (though not really integrated
into all core applications) with Windows XP/2003 by utilizing Samba,
ZFS, and Microsoft's Shadow Copy Client. The Shadow Copy Client
provides a straightforward GUI for Windows Explorer that lets you
browse, open, and restore previous versions of files, while ZFS
provides an excellent source of snapshots for the Shadow Copy Client to
look to for the previous versions.

A quick Google search turned up the following URL, which has some
screenshots to illustrate what the Shadow Copy Client looks like.

The default shadow copy VFS module for Samba doesn't work very well
with ZFS, but after some modifications it provides very good
integration of ZFS with Windows Explorer. If anyone is interested, I
can post the Samba patch to the list.

Ed Plese
Re: [zfs-discuss] Apple Time Machine
On Mon, Aug 07, 2006 at 02:36:27PM -0500, Ed Plese wrote:
> A quick Google search turned up the following URL, which has some
> screenshots to illustrate what the Shadow Copy Client looks like.

Oops... forgot the URL:

http://www.petri.co.il/how_to_use_the_shadow_copy_client.htm

Ed Plese