Re: [zfs-discuss] deduplication
On Fri, Jul 17, 2009 at 2:42 PM, Brandon High bh...@freaks.com wrote:
> The keynote was given on Wednesday. Any more willingness to discuss dedup
> on the list now?

The following video contains a de-duplication overview from Bill and Jeff:

https://slx.sun.com/1179275620

Hope this helps,
- Ryan
--
http://prefetch.net
[zfs-discuss] Throttling application I/O
Howdy,

We are running one write-intensive and one read-intensive Java process on a
server, and would like to give preferential I/O treatment to the
read-intensive process. There was a discussion a while back about adding
throttling measures to ZFS, and I was curious whether that work would allow
you to throttle application I/O. We can implement throttling in the
application, but I am hoping there is a way to throttle applications from
inside the kernel.

Any thoughts or suggestions are welcome.

- Ryan
--
http://prefetch.net
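For what it's worth, before throttling anything I have been using DTrace to
confirm which process is actually generating the physical I/O. This is only
a measurement sketch, not a throttle, and writes flushed later by the ZFS
txg sync thread may be attributed to sched rather than the application:

$ dtrace -n 'io:::start { @[execname, pid] = sum(args[0]->b_bcount); }'
(let it run for a while, then press Ctrl-C to print the bytes issued per
process)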
Re: [zfs-discuss] zfs comparison
On Jan 18, 2008 7:35 AM, Darren J Moffat [EMAIL PROTECTED] wrote:
> Sengor wrote:
>> On 1/17/08, Darren J Moffat [EMAIL PROTECTED] wrote:
>>>> Pardon my ignorance, but is ZFS with compression safe to use in a
>>>> production environment?
>>>
>>> Yes, why wouldn't it be? If it wasn't safe it wouldn't have been
>>> delivered.
>>
>> Few reasons -
>> http://prefetch.net/blog/index.php/2007/11/28/is-zfs-ready-for-primetime/
>
> The article (not the comments) is completely free of content and is scare
> mongering. It doesn't even say whether this is ZFS on Solaris (vs. BSD or
> Mac OS X), never mind what release, the configuration of the pool, or even
> what the actual bug apparently was. Was the pool redundant? If there were
> bugs, what are the bug numbers, and are they fixed?

I don't think this is scare mongering at all. I wrote the blog entry after a
ZFS bug (6454482) corrupted a pool on one of our production servers, and an
as-yet-unidentified bug (which appears to be different than 6454482)
corrupted a pool on another system. ZFS is an incredible file system, but
based on the fact that we lost data twice, I am somewhat hesitant to
continue using it.

Just my .02,
- Ryan
--
UNIX Administrator
http://prefetch.net
Re: [zfs-discuss] ZFS Oracle10
On 10/8/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
> Is there any collateral that I could share with a prospective customer on
> ZFS and Oracle 10?

The following documents (as well as the blogs from the ZFS developers)
contain some useful information related to database tuning (including
Oracle):

http://www.solarisinternals.com/wiki/index.php/Solaris_Internals_and_Performance_FAQ
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide

- Ryan
--
UNIX Administrator
http://prefetch.net
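One item that comes up repeatedly in those documents is matching the dataset
record size to the database block size. A rough sketch, assuming an 8K
db_block_size and made-up pool/dataset names (the property only affects
files created after it is set, so set it before laying down the data files):

# zfs create tank/oradata
# zfs set recordsize=8k tank/oradata
# zfs get recordsize tank/oradata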
[zfs-discuss] TCP connections not getting cleaned up after application exits
Howdy,

We are running zones on a number of Solaris 10 update 3 hosts, and we are
bumping into an issue where the kernel doesn't clean up connections after an
application exits. When this issue occurs, the netstat utility doesn't show
anything listening on the port the application uses (8080 in the example
below), but connections are still listed in the ESTABLISHED state:

$ netstat -an | grep LISTEN
      *.22                 *.*                0      0 49152      0 LISTEN
      127.0.0.1.25         *.*                0      0 49152      0 LISTEN
      127.0.0.1.587        *.*                0      0 49152      0 LISTEN

$ netstat -an | grep ESTAB | grep 8080
10.32.51.230.8080    10.10.12.6.34252    65535      0 49248      0 ESTABLISHED
10.32.51.230.8080    10.10.12.7.54136        1      0 49680      0 ESTABLISHED
10.32.51.230.8080    10.10.12.8.19335    62975      0 49248      0 ESTABLISHED

Normally I would open a ticket with Sun when I bump into issues with Solaris
10, but I couldn't find anything in the bug database to indicate this was a
known problem. Does anyone happen to know if this is a known issue? I
rebooted the server with the '-d' option the last time this issue occurred,
so I have a core file available if anyone is interested in investigating the
issue (assuming this is an unknown problem).

Thanks for any insight,
- Ryan
--
UNIX Administrator
http://prefetch.net
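In case it helps anyone chasing something similar, this is a rough way to
double-check whether any process still holds a socket on the port. pfiles
briefly stops each process it examines, so use it with care on a busy box:

$ for p in /proc/[0-9]*; do pid=`basename $p`; pfiles $pid 2>/dev/null | grep 'port: 8080' >/dev/null && echo "pid $pid has port 8080 open"; done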
Re: [zfs-discuss] Bugs fixed in update 4?
On 9/18/07, Larry Wake [EMAIL PROTECTED] wrote:
> Matty wrote:
>> George Wilson put together a list of ZFS enhancements and bug fixes that
>> were integrated into Solaris 10 update 3, and I was curious if there was
>> something similar for update 4? There have been a bunch of reliability
>> and performance enhancements to the ZFS code over the past few months,
>> and I am curious which ones were integrated into the latest Solaris 10
>> update (I can't seem to find a full list of bugs fixed -- if there is
>> one, please let me know where the bug fixes are documented).
>>
>> Thanks for any insight,
>> - Ryan
>
> This isn't specific to ZFS, but the Solaris 10 8/07 release notes doc has
> a list of all patches included and the bugs they address:
> http://docs.sun.com/app/docs/doc/820-1259/ .

Hi Larry,

I only see 3 - 4 CRs related to ZFS in that list. Did any additional bug
fixes make it into update 4 that are not included in the patch list?

Thanks,
- Ryan
--
UNIX Administrator
http://prefetch.net
Re: [zfs-discuss] Zfs log device (zil) ever coming to Sol10?
On 9/18/07, Neil Perrin [EMAIL PROTECTED] wrote:
> Separate log devices (slogs) didn't make it into S10U4 but will be in U5.

This is awesome! Will the SYNC_NV support that was integrated this week be
added to update 5 as well? That would be super useful, assuming the major
array vendors support it.

Thanks,
- Ryan
--
UNIX Administrator
http://prefetch.net
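For anyone who has not tried slogs on Nevada yet, the basic usage is roughly
the following (device names are made up; check zpool(1M) on your build
before trusting the syntax):

# zpool add tank log c4t0d0
# zpool add tank log mirror c4t0d0 c4t1d0    (mirrored log variant)
# zpool status tank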
[zfs-discuss] ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers paniced today with the following
error:

Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after panic:
assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125

The server saved a core file, and the resulting backtrace is listed below:

$ mdb unix.0 vmcore.0
> $c
vpanic()
0xfb9b49f3()
space_map_remove+0x239()
space_map_load+0x17d()
metaslab_activate+0x6f()
metaslab_group_alloc+0x187()
metaslab_alloc_dva+0xab()
metaslab_alloc+0x51()
zio_dva_allocate+0x3f()
zio_next_stage+0x72()
zio_checksum_generate+0x5f()
zio_next_stage+0x72()
zio_write_compress+0x136()
zio_next_stage+0x72()
zio_wait_for_children+0x49()
zio_wait_children_ready+0x15()
zio_next_stage_async+0xae()
zio_wait+0x2d()
arc_write+0xcc()
dmu_objset_sync+0x141()
dsl_dataset_sync+0x23()
dsl_pool_sync+0x7b()
spa_sync+0x116()
txg_sync_thread+0x115()
thread_start+8()

It appears ZFS is still able to read the labels from the drive:

$ zdb -lv /dev/rdsk/c3t50002AC00039040Bd0p0
LABEL 0
    version=3
    name='fpool0'
    state=0
    txg=4
    pool_guid=10406529929620343615
    top_guid=3365726235666077346
    guid=3365726235666077346
    vdev_tree
        type='disk'
        id=0
        guid=3365726235666077346
        path='/dev/dsk/c3t50002AC00039040Bd0p0'
        devid='id1,[EMAIL PROTECTED]/q'
        whole_disk=0
        metaslab_array=13
        metaslab_shift=31
        ashift=9
        asize=322117566464
LABEL 1
    version=3
    name='fpool0'
    state=0
    txg=4
    pool_guid=10406529929620343615
    top_guid=3365726235666077346
    guid=3365726235666077346
    vdev_tree
        type='disk'
        id=0
        guid=3365726235666077346
        path='/dev/dsk/c3t50002AC00039040Bd0p0'
        devid='id1,[EMAIL PROTECTED]/q'
        whole_disk=0
        metaslab_array=13
        metaslab_shift=31
        ashift=9
        asize=322117566464
LABEL 2
    version=3
    name='fpool0'
    state=0
    txg=4
    pool_guid=10406529929620343615
    top_guid=3365726235666077346
    guid=3365726235666077346
    vdev_tree
        type='disk'
        id=0
        guid=3365726235666077346
        path='/dev/dsk/c3t50002AC00039040Bd0p0'
        devid='id1,[EMAIL PROTECTED]/q'
        whole_disk=0
        metaslab_array=13
        metaslab_shift=31
        ashift=9
        asize=322117566464
LABEL 3
    version=3
    name='fpool0'
    state=0
    txg=4
    pool_guid=10406529929620343615
    top_guid=3365726235666077346
    guid=3365726235666077346
    vdev_tree
        type='disk'
        id=0
        guid=3365726235666077346
        path='/dev/dsk/c3t50002AC00039040Bd0p0'
        devid='id1,[EMAIL PROTECTED]/q'
        whole_disk=0
        metaslab_array=13
        metaslab_shift=31
        ashift=9
        asize=322117566464

But for some reason it is unable to open the pool:

$ zdb -c fpool0
zdb: can't open fpool0: error 2

I saw several bugs related to space_map.c, but the stack traces listed in
the bug reports were different than the one listed above. Has anyone seen
this bug before? Is there any way to recover from it?

Thanks for any insight,
- Ryan
--
UNIX Administrator
http://prefetch.net
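For the archives: on recent OpenSolaris builds people have worked around
similar space map assertion panics long enough to copy data off a pool by
disabling the assertion and setting the zfs_recover flag in /etc/system. I
have not verified that either variable exists in the Solaris 10 update 3
kernel, and this can make matters worse, so treat it strictly as a
last-resort sketch to test on a scratch machine first:

* /etc/system fragment -- verify the symbols exist first, for example with:
*   echo "zfs_recover/D" | mdb -k
set zfs:zfs_recover=1
set aok=1

After a reboot, try importing the pool and immediately copy any data you
care about somewhere else.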
Re: [zfs-discuss] compression=on and zpool attach
On 9/12/07, Mike DeMarco [EMAIL PROTECTED] wrote:
> Striping several disks together with a stripe width that is tuned for your
> data model is how you could get your performance up. Striping has been
> left out of the ZFS model for some reason. Where it is true that RAIDZ
> will stripe the data across a given drive set, it does not give you the
> option to tune the stripe width. Due to the write performance problems of
> RAIDZ, you may not get a performance boost from its striping if your write
> to read ratio is too high, since the driver has to calculate parity for
> each write.

I am not sure why you think striping has been left out of the ZFS model. If
you create a ZFS pool without the raidz or mirror keywords, the pool will be
striped. Also, the recordsize tunable can be useful for matching up
application I/O to physical I/O.

Thanks,
- Ryan
--
UNIX Administrator
http://prefetch.net
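To make that concrete, here is a rough example with made-up device and
dataset names: a pool built from plain devices is a dynamic stripe across
all of them, and the record size can then be matched to the application's
I/O size:

# zpool create tank c1t1d0 c1t2d0 c1t3d0 c1t4d0
# zfs create tank/appdata
# zfs set recordsize=8k tank/appdata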
[zfs-discuss] Supporting recordsizes larger than 128K?
Are there any plans to support record sizes larger than 128K? We use ZFS
file systems for disk staging on our backup servers (compression is a nice
feature here), and we typically configure the disk staging process to read
and write large blocks (typically 1MB or so). This reduces the number of
I/Os that take place to our storage arrays, and our testing has shown that
we can push considerably more I/O with 1MB+ block sizes.

Thanks for any insight,
- Ryan
--
UNIX Administrator
http://prefetch.net
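For reference, 128K is currently the largest value the recordsize property
accepts, so the best we can do for the staging file systems today is roughly
the following (pool name made up):

# zfs create dstpool/staging
# zfs set recordsize=128k dstpool/staging
# zfs set compression=on dstpool/staging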
Re: [zfs-discuss] ZFS + ISCSI + LINUX QUESTIONS
On 5/31/07, Darren J Moffat [EMAIL PROTECTED] wrote:
> Since you are doing iSCSI and may not be running ZFS on the initiator
> (client) then I highly recommend that you run with IPsec using at least AH
> (or ESP with Authentication) to protect the transport. Don't assume that
> your network is reliable. ZFS won't help you here if it isn't running on
> the iSCSI initiator, and even if it is it would need two targets to be
> able to repair.

If you don't intend to encrypt the iSCSI headers / payloads, why not just
use the header and data digests that are part of the iSCSI protocol?

Thanks,
- Ryan
--
UNIX Administrator
http://prefetch.net
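If I remember the initiator syntax correctly, turning the digests on looks
roughly like the following; please double-check the option names against
iscsiadm(1M) on your release, and the target IQN below is just a
placeholder:

# iscsiadm modify initiator-node --headerdigest CRC32 --datadigest CRC32
(or for a single target)
# iscsiadm modify target-param --headerdigest CRC32 --datadigest CRC32 <target-iqn>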
Re: [zfs-discuss] ARC, mmap, pagecache...
On 5/4/07, Roch - PAE [EMAIL PROTECTED] wrote:
> Manoj Joseph writes:
>> Hi, I was wondering about the ARC and its interaction with the VM
>> pagecache... When a file on a ZFS filesystem is mmaped, does the ARC
>> cache get mapped to the process' virtual memory? Or is there another
>> copy?
>
> My understanding is,
> The ARC does not get mapped to user space. The data ends up in the ARC
> (recordsize chunks) and in the page cache (in page chunks). Both copies
> are updated on writes.

If that is the case, are there any plans to unify the ARC and the page
cache?

Thanks,
- Ryan
--
UNIX Administrator
http://prefetch.net
Re: [zfs-discuss] Permanently removing vdevs from a pool
On 4/20/07, George Wilson [EMAIL PROTECTED] wrote:
> This is a high priority for us and is actively being worked. Vague enough
> for you. :-) Sorry I can't give you anything more exact than that.

Hi George,

If ZFS is supposed to be part of OpenSolaris, then why can't the community
get additional details? It really seems like much of the development and
design of ZFS goes on behind closed doors, and the community as a whole is
involved after the fact (Eric Schrock has requested feedback from list
members, which is awesome!). This makes it difficult for folks to
contribute, and to offer suggestions (or code) that would improve ZFS as a
whole. Is there a reason that more of the ZFS development discussions aren't
occurring in public?

Thanks,
- Ryan
--
UNIX Administrator
http://prefetch.net
Re: [zfs-discuss] Permanently removing vdevs from a pool
On 4/19/07, Mark J Musante [EMAIL PROTECTED] wrote:
> On Thu, 19 Apr 2007, Mario Goebbels wrote:
>> Is it possible to gracefully and permanently remove a vdev from a pool
>> without data loss?
>
> Is this what you're looking for?
> http://bugs.opensolaris.org/view_bug.do?bug_id=4852783
> If so, the answer is 'not yet'.

Can the ZFS team comment on how far out this feature is? There are a couple
of items (and bugs) that are preventing us from deploying ZFS, and this is
one of them.

Thanks for any insight,
- Ryan
--
UNIX Administrator
http://prefetch.net
Re: Re[6]: [zfs-discuss] ZFS Boot support for the x86 platform
Howdy,

This is awesome news that ZFS boot support is available for x86 platforms.
Do any of the ZFS developers happen to know when ZFS boot support for SPARC
will be available?

Thanks,
- Ryan
--
UNIX Administrator
http://prefetch.net
Re: [zfs-discuss] Re: Re: update on zfs boot support
On 3/11/07, Robert Milkowski [EMAIL PROTECTED] wrote:
> IW> Got it, thanks, and a more general question, in a single disk root
> IW> pool scenario, what advantage will zfs provide over ufs w/ logging?
> IW> And when zfs boot is integrated in Nevada, will live upgrade work
> IW> with zfs root?
>
> Snapshots/clones + live upgrade or standard patching. Additionally no more
> hassle with separate /opt /var ...

I am curious how snapshots and clones will be integrated with grub. Will it
be possible to boot from a snapshot? I think this would be useful when
applying patches, since you could snapshot /, /var and /opt, patch the
system, and revert back (by choosing a snapshot from the grub menu) to the
snapshot if something went awry. Is this how the zfs boot team envisions
this working?

Thanks,
- Ryan
--
UNIX Administrator
http://prefetch.net
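The workflow I have in mind would look roughly like the following (dataset
names are made up, and the grub menu piece is obviously the part that does
not exist yet):

# zfs snapshot rootpool/root@prepatch
# zfs snapshot rootpool/var@prepatch
# zfs snapshot rootpool/opt@prepatch
(apply patches and test; if something is broken, either roll back or boot
the pre-patch environment from the grub menu)
# zfs rollback rootpool/var@prepatch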
Re: [zfs-discuss] Re: Re: update on zfs boot support
On 3/11/07, Lin Ling [EMAIL PROTECTED] wrote:
> Matty wrote:
>> I am curious how snapshots and clones will be integrated with grub. Will
>> it be possible to boot from a snapshot? I think this would be useful when
>> applying patches, since you could snapshot /, /var and /opt, patch the
>> system, and revert back (by choosing a snapshot from the grub menu) to
>> the snapshot if something went awry. Is this how the zfs boot team
>> envisions this working?
>
> You can snapshot/clone, and revert back by choosing the clone from the
> grub menu to boot. Since a snapshot is a read-only filesystem, directly
> booting from it is not supported for the initial release. However, it is
> on our to-investigate list.

How will /boot/grub/menu.lst be updated? Will the admin have to run bootadm
after the root clone is created, or will the zfs utility be enhanced to
populate / remove entries from menu.lst?

Thanks,
- Ryan
--
UNIX Administrator
http://prefetch.net
Re: [zfs-discuss] zfs and iscsi: cannot open device: I/O error
On 2/26/07, cedric briner [EMAIL PROTECTED] wrote:
> hello,
>
> I'm trying to consolidate my HDs in a cheap but (I hope) reliable manner.
> To do so, I was thinking to use zfs over iscsi. Unfortunately, I'm having
> some issue with it, when I do:
>
> # iscsi server (nexenta alpha 5)
> svcadm enable iscsitgt
> iscsitadm delete target --lun 0 vol-1
> iscsitadm list target            # empty
> iscsitadm create target -b /dev/dsk/c0d0s5 vol-1
> iscsitadm list target            # not empty
> Target: vol-1
>     iSCSI Name: iqn.1986-03.com.sun:02:662bd119-1660-6141-cea7-dd799d53b254.vol-1
>     Connections: 0
>
> # iscsi client (solaris 5.10, up-to-date)
> iscsiadm add discovery-address 10.194.67.111    # (iscsi server)
> iscsiadm modify discovery --sendtargets enable
> iscsiadm list discovery-address                 # not empty
> iscsiadm list target                            # not empty
> Target: iqn.1986-03.com.sun:02:662bd119-1660-6141-cea7-dd799d53b254.vol-1
>     Alias: vol-1
>     TPGT: 1
>     ISID: 402a
>     Connections: 1
> devfsadm -i iscsi                               # to create the device on sf3
> iscsiadm list target -Sv | egrep 'OS Device|Peer|Alias'    # not empty
>     Alias: vol-1
>     IP address (Peer): 10.194.67.111:3260
>     OS Device Name: /dev/rdsk/c1t014005A267C12A0045E2F524d0s2
> zpool create tank c1t014005A267C12A0045E2F524d0s2
> cannot open '/dev/dsk/c1t014005A267C12A0045E2F524d0s2': I/O error
>
> The error was produced when using the type ``disk'' for iscsi. I've
> followed the advice of Roch to try the different iscsi types
> (disk|raw|tape), but unfortunately the only type that accepts
> ``iscsitadm create target -b /dev/dsk/c0d0s5'' is the type disk, which
> doesn't work.
>
> Any idea of what I could do to improve this?

Does the device /dev/dsk/c1t014005A267C12A0045E2F524d0s2 exist (you can run
`ls /dev/dsk/c1t014005A267C12A0045E2F524d0s2' to see)? What happens if you
try to access the device after running devfsadm -C?

- Ryan
--
UNIX Administrator
http://prefetch.net
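A couple of other quick checks that may be worth running on the initiator,
using the device name from your output:

$ ls -lL /dev/dsk/c1t014005A267C12A0045E2F524d0s2 /dev/rdsk/c1t014005A267C12A0045E2F524d0s2
$ dd if=/dev/rdsk/c1t014005A267C12A0045E2F524d0s2 of=/dev/null bs=512 count=1
$ prtvtoc /dev/rdsk/c1t014005A267C12A0045E2F524d0s2

If the dd read also fails with an I/O error, the problem is below ZFS (the
target or the transport) rather than in zpool create itself.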
[zfs-discuss] ZFS fragmentation
Howdy,

I have seen a number of folks run into issues due to ZFS file system
fragmentation, and was curious if anyone on team ZFS is working on this
issue? Would it be possible to share with the list any changes that will be
made to help address fragmentation problems?

Thanks,
- Ryan
--
UNIX Administrator
http://prefetch.net
[zfs-discuss] Why doesn't Solaris remove a faulty disk from operation?
Howdy,

On one of my Solaris 10 11/06 servers, I am getting numerous errors similar
to the following:

Feb 11 09:30:23 rx scsi: WARNING: /[EMAIL PROTECTED],2000/[EMAIL PROTECTED],1/[EMAIL PROTECTED],0 (sd1):
Feb 11 09:30:23 rx      Error for Command: write(10)    Error Level: Retryable
Feb 11 09:30:23 rx scsi:        Requested Block: 58458343    Error Block: 58458343
Feb 11 09:30:23 rx scsi:        Vendor: SEAGATE    Serial Number: 0404A72YCG
Feb 11 09:30:23 rx scsi:        Sense Key: Hardware Error
Feb 11 09:30:23 rx scsi:        ASC: 0x19 (defect list error), ASCQ: 0x0, FRU: 0x2
Feb 11 09:32:18 rx scsi: WARNING: /[EMAIL PROTECTED],2000/[EMAIL PROTECTED],1/[EMAIL PROTECTED],0 (sd1):
Feb 11 09:32:18 rx      Error for Command: write(10)    Error Level: Retryable
Feb 11 09:32:18 rx scsi:        Requested Block: 58696759    Error Block: 58696501
Feb 11 09:32:18 rx scsi:        Vendor: SEAGATE    Serial Number: 0404A72YCG
Feb 11 09:32:18 rx scsi:        Sense Key: Media Error
Feb 11 09:32:18 rx scsi:        ASC: 0xc (write error - auto reallocation failed), ASCQ: 0x2, FRU: 0x1

Assuming I am reading the error message correctly, it looks like the disk
drive (c2t2d0) has used up all of the spare sectors used to reallocate bad
sectors. If this is the case, is there a reason Solaris doesn't offline the
drive? This would allow ZFS to evict the faulty disk from my pool, and kick
in the spare disk drive I have configured:

$ zpool status -v
  pool: rz2pool
 state: ONLINE
 scrub: scrub completed with 0 errors on Sat Feb 10 18:46:54 2007
config:

        NAME         STATE     READ WRITE CKSUM
        rz2pool      ONLINE       0     0     0
          raidz2     ONLINE       0     0     0
            c1t9d0   ONLINE       0     0     0
            c1t10d0  ONLINE       0     0     0
            c1t12d0  ONLINE       0     0     0
            c2t1d0   ONLINE       0     0     0
            c2t2d0   ONLINE       0     0     0
        spares
          c2t3d0     AVAIL

errors: No known data errors

Thanks for any insight,
- Ryan
--
UNIX Administrator
http://prefetch.net
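For anyone else watching a drive in this state, the commands I have been
using to keep an eye on it are roughly:

$ iostat -En c2t2d0        (soft/hard/transport error counters for the device)
$ fmdump -e                (error telemetry the fault manager has recorded)
$ fmadm faulty             (anything FMA has actually diagnosed as faulted)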
[zfs-discuss] Re: [storage-discuss] Why doesn't Solaris remove a faulty disk from operation?
On 2/11/07, Robert Milkowski [EMAIL PROTECTED] wrote:
> Hello Matty,
>
> Sunday, February 11, 2007, 6:56:14 PM, you wrote:
>
> M> Howdy,
> M> On one of my Solaris 10 11/06 servers, I am getting numerous errors
> M> similar to the following:
>
> AFAIK nothing was integrated yet to do it. Hot Spare will kick in
> automatically only when zfs can't open a device; other than that, you are
> in manual mode for now.

Yikes! Does anyone from the ZFS / storage team happen to know when work will
complete to detect and replace failed disk drives? If hot spares don't
actually kick in to replace failed drives, is there any value in using them?

Thanks,
- Ryan
--
UNIX Administrator
http://prefetch.net
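Until the automated path exists, I assume the manual dance looks something
like the following (using the rz2pool layout from my earlier mail; please
double-check against zpool(1M) before trusting it):

# zpool offline rz2pool c2t2d0
# zpool replace rz2pool c2t2d0 c2t3d0
# zpool status rz2pool

Once the failed disk has been physically replaced, the spare can be returned
to the spare pool with zpool detach.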
[zfs-discuss] Work arounds for bug #6456888?
Howdy,

We bumped into the issue described in bug #6456888 on one of our production
systems, and I was curious if any progress has been made on this bug? Are
there any workarounds available for this issue (the workaround section in
the bug report is empty)?

Thanks,
- Ryan
--
UNIX Administrator
http://prefetch.net
[zfs-discuss] Re: [storage-discuss] ZFS/iSCSI target integration
On Wed, 1 Nov 2006, Adam Leventhal wrote:
> Rick McNeal and I have been working on building support for sharing ZVOLs
> as iSCSI targets directly into ZFS. Below is the proposal I'll be
> submitting to PSARC. Comments and suggestions are welcome.
>
> Adam
>
> ---8<---
>
> iSCSI/ZFS Integration
>
> A. Overview
>
> The goal of this project is to couple ZFS with the iSCSI target in Solaris
> specifically to make it as easy to create and export ZVOLs via iSCSI as it
> is to create and export ZFS filesystems via NFS. We will add two new ZFS
> properties to support this feature.
>
> shareiscsi
>
> Like the 'sharenfs' property, 'shareiscsi' indicates if a ZVOL should be
> exported as an iSCSI target. The acceptable values for this property are
> 'on', 'off', and 'direct'. In the future, we may support other target
> types (for example, 'tape'). The default is 'off'. This property may be
> set on filesystems, but has no direct effect; this is to allow ZVOLs
> created under the ZFS hierarchy to inherit a default. For example, an
> administrator may want ZVOLs to be shared by default, and so set
> 'shareiscsi=on' for the pool.
>
> iscsioptions
>
> This property, which is hidden by default, is used by the iSCSI target
> daemon to store persistent information such as the IQN. The contents are
> not intended for users or external consumers.
>
> B. Examples
>
> iSCSI targets are simple to create with the zfs(1M) command:
>
>   # zfs create -V 100M pool/volumes/v1
>   # zfs set shareiscsi=on pool/volumes/v1
>   # iscsitadm list target
>   Target: pool/volumes/v1
>       iSCSI Name: iqn.1986-03.com.sun:02:4db92521-f5dc-cde4-9cd5-a3f6f567220a
>       Connections: 0
>
> Renaming the ZVOL has the expected result for the iSCSI target:
>
>   # zfs rename pool/volumes/v1 pool/volumes/stuff
>   # iscsitadm list target
>   Target: pool/volumes/stuff
>       iSCSI Name: iqn.1986-03.com.sun:02:4db92521-f5dc-cde4-9cd5-a3f6f567220a
>       Connections: 0
>
> Note that per the iSCSI specification (RFC 3720), the iSCSI Name is
> unchanged after the ZVOL is renamed.
>
> Exporting a pool containing a shared ZVOL will cause the target to be
> removed; importing a pool containing a shared ZVOL will cause the target
> to be shared:
>
>   # zpool export pool
>   # iscsitadm list target
>   # zpool import pool
>   # iscsitadm list target
>   Target: pool/volumes/stuff
>       iSCSI Name: iqn.1986-03.com.sun:02:4db92521-f5dc-cde4-9cd5-a3f6f567220a
>       Connections: 0
>
> Note again that all configuration information is stored with the dataset.
> As with NFS shared filesystems, iSCSI targets imported on a different
> system will be shared appropriately.

This is super useful! Will ACLs and aliases be stored as properties? Could
you post the list of available iSCSI properties to the list?

Thanks,
- Ryan
--
UNIX Administrator
http://prefetch.net
Re: [zfs-discuss] ZFS Automatic Device Error Notification?
On 10/31/06, Wes Williams [EMAIL PROTECTED] wrote:
> Okay, so now that I'm planning to build my NAS using ZFS, I now need to
> devise or learn of a preexisting method to receive notification of ZFS
> handled errors on a remote machine. For example, if a disk fails and I
> don't regularly login or SSH into the ZFS server, I'd like an email or
> some other notification immediately of the ZFS event. Are there already
> ways to do this?

I use the smartmontools smartd daemon to email me when disk drives are about
to fail. If you are interested in configuring smartd to send email
notifications prior to a disk failing, check out the following blog post:

http://prefetch.net/blog/index.php/2006/01/05/using-smartd-on-solaris-systems-to-find-disk-drive-problems/

Thanks,
- Ryan
--
UNIX Administrator
http://prefetch.net
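The smartd side is basically one line per drive in smartd.conf (the path
depends on how smartmontools was built). The device name and mail address
below are just examples:

/dev/rdsk/c1t1d0s0 -d scsi -H -m admin@example.com -M test

(-H checks the drive's overall health status, -m is the notification
address, and -M test sends a test message when smartd starts so you can
confirm mail delivery works.)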
[zfs-discuss] Migrating vdevs to new pools
Howdy,

We use VxVM quite a bit at my place of employment, and are extremely
interested in moving to ZFS to reduce complexity and costs. One useful
feature that is in VxVM that doesn't seem to be in ZFS is the ability to
migrate vdevs between pools. This is extremely useful when you want to
synchronize a set of temporary disks with a set of master disks, and migrate
the temporary disks to an alternate server once the synchronization
completes (we use this to perform host-free backups). Are there any plans to
introduce a similar capability in ZFS? I read through the zpool manual page
and the ZFS documentation on opensolaris.org, but was unable to find a
specific set of options to accomplish this. If I missed something, please
let me know. I will gladly wander off to read.

Thanks for any insight,
- Ryan
--
UNIX Administrator
http://prefetch.net
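The closest thing I have found so far is replicating at the dataset level
rather than moving the disks themselves, which is not the same thing, but I
will mention it for completeness. A rough sketch with made-up host and
dataset names:

# zfs snapshot tank/data@backup1
# zfs send tank/data@backup1 | ssh backuphost zfs recv -d backuppool
(later, send only the changes since the previous snapshot)
# zfs snapshot tank/data@backup2
# zfs send -i tank/data@backup1 tank/data@backup2 | ssh backuphost zfs recv -d backuppool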
Re: [zfs-discuss] Zones root-fs in ZFS ? (fwd)
On Thu, 27 Jul 2006, Eric Schrock wrote:
> The original reasoning was that we didn't have enough time to validate the
> behavior of the zone upgrade tools with ZFS as the root filesystem,
> particularly as these tools (Ashanti, Zulu) are a moving target. Upon
> closer inspection, we found that this scenario should work with the
> current upgrade solution.
>
> What will definitely not work is to delegate a ZFS dataset to a local
> zone, and then place system software (i.e. Solaris package contents)
> within such a filesystem. This should work if the mountpoint is set to
> 'legacy', however.
>
> Basically, rather than trying to explain the ins and outs of the current
> situation (which aren't entirely understood in the context of future zones
> upgrade solutions), we opted to declare this as 'unsupported'. In reality,
> putting zone roots on ZFS datasets should be perfectly safe (modulo the
> caveat above). However, we reserve the right to break upgrade in the face
> of such zones.
>
> We are working on integrating ZFS more closely with the whole install and
> live upgrade experience. Part of this work will include making zones
> upgrade behave well regardless of ZFS configuration. The current
> install/upgrade process has a long history of preconceived notions that
> don't apply in the ZFS world (such as a 1-1 relationship between devices
> and filesystems, /etc/vfstab, legacy mountpoints, etc), so this is no
> small task.

Are there any known issues with patching zones that are installed on a ZFS
file system? Does smpatch and company work ok with this configuration?

Thanks,
- Ryan
--
UNIX Administrator
http://prefetch.net
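For anyone who wants to experiment with this layout, the setup I have been
testing looks roughly like the following (pool, dataset and zone names are
made up; zoneadm install expects the zonepath to exist with 700 permissions):

# zfs create tank/zones
# zfs create tank/zones/web01
# zfs set mountpoint=/zones/web01 tank/zones/web01
# chmod 700 /zones/web01
# zonecfg -z web01 'create; set zonepath=/zones/web01'
# zoneadm -z web01 install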