Re: [zfs-discuss] Mirrored Servers

2010-05-08 Thread Ian Collins
On 05/ 9/10 10:07 AM, Tony wrote: Let's say I have two servers, both running opensolaris with ZFS. I basically want to be able to create a filesystem where the two servers have a common volume, that is mirrored between the two. Meaning, each server keeps an identical, real-time backup of the

Re: [zfs-discuss] Plugging in a hard drive after Solaris has booted up?

2010-05-07 Thread Ian Collins
On 05/ 8/10 04:38 PM, Giovanni wrote: Hi guys, I have a quick question, I am playing around with ZFS and here's what I did. I created a storage pool with several drives. I unplugged 3 out of 5 drives from the array, currently: NAME STATE READ WRITE CKSUM gpool

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-05 Thread Ian Collins
On 05/ 6/10 05:32 AM, Richard Elling wrote: On May 4, 2010, at 7:55 AM, Bob Friesenhahn wrote: On Mon, 3 May 2010, Richard Elling wrote: This is not a problem on Solaris 10. It can affect OpenSolaris, though. That's precisely the opposite of what I thought. Care to

Re: [zfs-discuss] Different devices with the same name in zpool status

2010-05-05 Thread Ian Collins
On 05/ 6/10 11:48 AM, Brandon High wrote: I know for certain that my rpool and tank pool are not both using c6t0d0 and c6t1d0, but that's what zpool status is showing. It appears to be an output bug, or a problem with the zpool.cache, since format shows my rpool devices at c8t0d0 and c8t1d0.

Re: [zfs-discuss] why both dedup and compression?

2010-05-05 Thread Ian Collins
On 05/ 6/10 03:35 PM, Richard Jahnel wrote: Hmm... To clarify. Every discussion or benchmarking that I have seen always show both off, compression only or both on. Why never compression off and dedup on? After some further thought... perhaps it's because compression works at the byte level

Re: [zfs-discuss] replaced disk...copy back completed but spare is in use

2010-05-04 Thread Ian Collins
On 05/ 5/10 11:09 AM, Brad wrote: I yanked a disk to simulate failure to the test pool to test hot spare failover - everything seemed fine until the copy back completed. The hot spare is still showing as in use...do we need to remove the spare from the pool to get it to detach? Once the

Re: [zfs-discuss] Exporting iSCSI - it's still getting all the ZFS protection, right?

2010-05-03 Thread Ian Collins
On 05/ 4/10 11:33 AM, Michael Shadle wrote: Quick sanity check here. I created a zvol and exported it via iSCSI to a Windows machine so Windows could use it as a block device. Windows formats it as NTFS, thinks it's a local disk, yadda yadda. Is ZFS doing its magic checksumming and whatnot on

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-03 Thread Ian Collins
On 05/ 4/10 03:39 PM, Richard Elling wrote: On May 3, 2010, at 7:55 PM, Edward Ned Harvey wrote: From: Richard Elling [mailto:richard.ell...@gmail.com] Once you register your original Solaris 10 OS for updates, are you unable to get updates on the removable OS? This is

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-01 Thread Ian Collins
On 05/ 1/10 04:46 PM, Edward Ned Harvey wrote: One more really important gotcha. Let's suppose the version of zfs on the CD supports up to zpool 14. Let's suppose your live system had been fully updated before crash, and let's suppose the zpool had been upgraded to zpool 15. Wouldn't that

Re: [zfs-discuss] Virtual to physical migration

2010-04-30 Thread Ian Collins
On 05/ 1/10 03:09 PM, devsk wrote: Looks like the X's vesa driver can only use 1600x1200 resolution and not the native 1920x1200. Asking these questions on the ZFS list isn't going to get you very far. Try the opensolaris-help list. -- Ian.

Re: [zfs-discuss] Performance drop during scrub?

2010-04-29 Thread Ian Collins
On 04/30/10 10:35 AM, Bob Friesenhahn wrote: On Thu, 29 Apr 2010, Roy Sigurd Karlsbakk wrote: While there may be some possible optimizations, I'm sure everyone would love the random performance of mirror vdevs, combined with the redundancy of raidz3 and the space of a raidz1. However, as in

Re: [zfs-discuss] backwards/forward compatibility

2010-04-28 Thread Ian Collins
On 04/29/10 10:21 AM, devsk wrote: I had a pool which I created using zfs-fuse, which is using March code base (exact version, I don't know; if someone can tell me the command to find the zpool format version, I would be grateful). Try [zfs|zpool] upgrade. These commands will tell you
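
A minimal sketch of the version check suggested above ("tank" is a placeholder pool name, not one from the thread):

  # With no arguments, each command reports the versions in use on the system
  zpool upgrade
  zfs upgrade
  # The format version of a single pool can also be read as a property
  zpool get version tank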

Re: [zfs-discuss] The next release

2010-04-28 Thread Ian Collins
On 04/29/10 11:02 AM, autumn Wang wrote: One quick question: When will the next formal release be released? Of what? Does Oracle have a plan to support the OpenSolaris community as Sun did before? What is the direction of ZFS in future? Do you really expect answers to those questions

Re: [zfs-discuss] Performance drop during scrub?

2010-04-27 Thread Ian Collins
On 04/28/10 03:17 AM, Roy Sigurd Karlsbakk wrote: Hi all I have a test system with snv134 and 8x2TB drives in RAIDz2 and currently no ZIL or L2ARC. I noticed the I/O speed to NFS shares on the testpool drops to something hardly usable while scrubbing the pool. Is that small random or

Re: [zfs-discuss] Performance drop during scrub?

2010-04-27 Thread Ian Collins
On 04/28/10 10:01 AM, Bob Friesenhahn wrote: On Wed, 28 Apr 2010, Ian Collins wrote: On 04/28/10 03:17 AM, Roy Sigurd Karlsbakk wrote: Hi all I have a test system with snv134 and 8x2TB drives in RAIDz2 and currently no ZIL or L2ARC. I noticed the I/O speed to NFS shares on the testpool

Re: [zfs-discuss] Spare in use althought disk is healthy ?

2010-04-26 Thread Ian Collins
On 04/27/10 09:41 AM, Lutz Schumann wrote: Hello list, a pool shows some strange status: volume: zfs01vol state: ONLINE scrub: scrub completed after 1h21m with 0 errors on Sat Apr 24 04:22:38 mirror ONLINE 0 0 0 c2t12d0 ONLINE 0

Re: [zfs-discuss] ZFS Pool, what happen when disk failure

2010-04-25 Thread Ian Collins
On 04/26/10 12:08 AM, Edward Ned Harvey wrote: [why do you snip attributions?] On 04/26/10 01:45 AM, Robert Milkowski wrote: The system should boot-up properly even if some pools are not accessible (except rpool of course). If it is not the case then there is a bug - last time I checked it

Re: [zfs-discuss] [osol-discuss] Identifying what zpools are exported

2010-04-21 Thread Ian Collins
On 04/22/10 06:59 AM, Justin Lee Ewing wrote: So I can obviously see what zpools I have imported... but how do I see pools that have been exported? Kind of like being able to see deported volumes using vxdisk -o alldgs list. zpool import, kind of counter-intuitive! -- Ian.
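
A minimal sketch of that counter-intuitive answer (no names from the thread are assumed):

  # With no pool argument, zpool import only lists exported/available
  # pools; it does not actually import anything
  zpool import
  # Point it at another device directory if the pools live elsewhere
  zpool import -d /dev/dsk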

Re: [zfs-discuss] Can RAIDZ disks be slices ?

2010-04-20 Thread Ian Collins
On 04/20/10 05:32 PM, Sunil wrote: ouch! My apologies! I did not understand what you were trying to say. I was gearing towards: 1. Using the newer 1TB in the eventual RAIDZ. Newer hardware typically means (slightly) faster access times and sequential throughput. Using a slice on a newer

Re: [zfs-discuss] Best way to expand a raidz pool

2010-04-19 Thread Ian Collins
On 04/19/10 08:42 PM, Ian Garbutt wrote: Having looked through the forum I gather that you cannot just add an additional device to a raidz pool. This being the case, what are the alternatives I could use to expand a raidz pool? Either replace *all* the drives with bigger ones, or add
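
A sketch of the two alternatives, with placeholder device names; on builds of that vintage an export/import may be needed before the replaced drives' extra space appears:

  # Option 1: replace each drive in turn with a larger one and let it resilver
  zpool replace tank c1t0d0 c2t0d0
  # Option 2: stripe a second raidz vdev alongside the first
  zpool add tank raidz c3t0d0 c3t1d0 c3t2d0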

Re: [zfs-discuss] Can RAIDZ disks be slices ?

2010-04-19 Thread Ian Collins
On 04/20/10 04:13 PM, Sunil wrote: Hi, I have a strange requirement. My pool consists of 2 500GB disks in stripe which I am trying to convert into a RAIDZ setup without data loss but I have only two additional disks: 750GB and 1TB. So, here is what I thought: 1. Carve a 500GB slice (A) in

Re: [zfs-discuss] Can RAIDZ disks be slices ?

2010-04-19 Thread Ian Collins
On 04/20/10 05:00 PM, Sunil wrote: On 04/20/10 04:13 PM, Sunil wrote: Hi, I have a strange requirement. My pool consists of 2 500GB disks in stripe which I am trying to convert into a RAIDZ setup without data loss but I have only two additional disks: 750GB and 1TB. So, here is

Re: [zfs-discuss] Making ZFS better: rm files/directories from snapshots

2010-04-17 Thread Ian Collins
On 04/18/10 01:25 AM, Edward Ned Harvey wrote: From: Ian Collins [mailto:i...@ianshome.com] But it is a fundamental of zfs: snapshot: A read-only version of a file system or volume at a given point in time. It is specified as filesys...@name or vol

Re: [zfs-discuss] ZFS mirror

2010-04-16 Thread Ian Collins
On 04/17/10 09:34 AM, MstAsg wrote: I have a question. I have a disk that Solaris 10 ZFS is installed on. I wanted to add the other disks and replace this with the other. (totally three others). If I do this and add some other disks, would the data be written immediately? Or only the new data is

Re: [zfs-discuss] ZFS mirror

2010-04-16 Thread Ian Collins
On 04/17/10 10:09 AM, Richard Elling wrote: On Apr 16, 2010, at 2:49 PM, Ian Collins wrote: On 04/17/10 09:34 AM, MstAsg wrote: I have a question. I have a disk that solaris 10 zfs is installed. I wanted to add the other disks and replace this with the other. (totally three

Re: [zfs-discuss] Making an rpool smaller?

2010-04-16 Thread Ian Collins
On 04/17/10 11:41 AM, Brandon High wrote: When I set up my opensolaris system at home, I just grabbed a 160 GB drive that I had sitting around to use for the rpool. Now I'm thinking of moving the rpool to another disk, probably ssd, and I don't really want to shell out the money for two 160 GB

Re: [zfs-discuss] Making ZFS better: rm files/directories from snapshots

2010-04-16 Thread Ian Collins
On 04/17/10 12:56 PM, Edward Ned Harvey wrote: From: Erik Trimble [mailto:erik.trim...@oracle.com] Sent: Friday, April 16, 2010 7:35 PM Doesn't that defeat the purpose of a snapshot? Erik hits the nail right on the head: you *don't* want to support such a feature, as it breaks

Re: [zfs-discuss] ZFS panic

2010-04-14 Thread Ian Collins
On 04/ 2/10 10:25 AM, Ian Collins wrote: Is this callstack familiar to anyone? It just happened on a Solaris 10 update 8 box: genunix: [ID 655072 kern.notice] fe8000d1b830 unix:real_mode_end+7f81 () genunix: [ID 655072 kern.notice] fe8000d1b910 unix:trap+5e6 () genunix: [ID 655072

Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-14 Thread Ian Collins
On 04/15/10 06:16 AM, David Dyer-Bennet wrote: Because 132 was the most current last time I paid much attention :-). As I say, I'm currently holding out for 2010.$Spring, but knowing how to get to a particular build via package would be potentially interesting for the future still. I hope

Re: [zfs-discuss] Why would zfs have too many errors when underlying raid array is fine?

2010-04-12 Thread Ian Collins
On 04/12/10 05:39 PM, Willard Korfhage wrote: It is a Corsair 650W modular power supply, with 2 or 3 disks per cable. However, the Areca card is not reporting any errors, so I think power to the disks is unlikely to be a problem. Here's what is in /var/adm/messages Apr 11 22:37:41 fs9 fmd:

Re: [zfs-discuss] Create 1 pool from 3 exising pools in mirror configuration

2010-04-10 Thread Ian Collins
On 04/11/10 11:55 AM, Harry Putnam wrote: Would you mind expanding the abbrevs: ssd zil l2arc? http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide -- Ian.

Re: [zfs-discuss] Replacing disk in zfs pool

2010-04-09 Thread Ian Collins
On 04/ 9/10 08:58 PM, Andreas Höschler wrote: zpool attach tank c1t7d0 c1t6d0 This hopefully gives me a three-way mirror: mirror ONLINE 0 0 0 c1t15d0 ONLINE 0 0 0 c1t7d0 ONLINE 0 0 0 c1t6d0

Re: [zfs-discuss] zfs send hangs

2010-04-09 Thread Ian Collins
On 04/10/10 06:20 AM, Daniel Bakken wrote: My zfs filesystem hangs when transferring large filesystems (500GB) with a couple dozen snapshots between servers using zfs send/receive with netcat. The transfer hangs about halfway through and is unkillable, freezing all IO to the filesystem,

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-08 Thread Ian Collins
On 04/ 9/10 10:48 AM, Erik Trimble wrote: Well... The problem is (and this isn't just a ZFS issue) that resilver and scrub times /are/ very bad for 1TB disks. This goes directly to the problem of redundancy - if you don't really care about resilver/scrub issues, then you really shouldn't

Re: [zfs-discuss] To slice, or not to slice

2010-04-02 Thread Ian Collins
On 04/ 3/10 10:23 AM, Edward Ned Harvey wrote: Momentarily, I will begin scouring the omniscient interweb for information, but I’d like to know a little bit of what people would say here. The question is to slice, or not to slice, disks before using them in a zpool. Not. One reason to

[zfs-discuss] ZFS panic

2010-04-01 Thread Ian Collins
Is this callstack familiar to anyone? It just happened on a Solaris 10 update 8 box: genunix: [ID 655072 kern.notice] fe8000d1b830 unix:real_mode_end+7f81 () genunix: [ID 655072 kern.notice] fe8000d1b910 unix:trap+5e6 () genunix: [ID 655072 kern.notice] fe8000d1b920

Re: [zfs-discuss] RAID-Z with Permanent errors detected in files

2010-04-01 Thread Ian Collins
On 04/ 2/10 02:52 PM, Andrej Gortchivkin wrote: Hi All, I just came across a strange (well... at least for me) situation with ZFS and I hope you might be able to help me out. Recently I built a new machine from scratch for my storage needs which include various CIFS / NFS and most importantly

Re: [zfs-discuss] RAID-Z with Permanent errors detected in files

2010-04-01 Thread Ian Collins
On 04/ 2/10 03:30 PM, Andrej Gortchivkin wrote: I created the pool by using: zpool create ZPOOL_SAS_1234 raidz c7t0d0 c7t1d0 c7t2d0 c7t3d0 However now that you mentioned the lack of redundancy I see where is the problem. I guess it will then remain a mystery how did this happen, since I'm

Re: [zfs-discuss] Simultaneous failure recovery

2010-03-31 Thread Ian Collins
On 03/31/10 10:54 PM, Peter Tribble wrote: On Tue, Mar 30, 2010 at 10:42 PM, Eric Schrock eric.schr...@oracle.com wrote: On Mar 30, 2010, at 5:39 PM, Peter Tribble wrote: I have a pool (on an X4540 running S10U8) in which a disk failed, and the hot spare kicked in. That's perfect.

Re: [zfs-discuss] can't destroy snapshot

2010-03-31 Thread Ian Collins
On 04/ 1/10 01:51 AM, Charles Hedrick wrote: We're getting the notorious "cannot destroy ... dataset already exists". I've seen a number of reports of this, but none of the reports seem to get any response. Fortunately this is a backup system, so I can recreate the pool, but it's going to take

Re: [zfs-discuss] can't destroy snapshot

2010-03-31 Thread Ian Collins
On 04/ 1/10 02:01 PM, Charles Hedrick wrote: So we tried recreating the pool and sending the data again. 1) compression wasn't set on the copy, even though I did send -R, which is supposed to send all properties 2) I tried killing the send | receive pipe. Receive couldn't be killed. It hung.

Re: [zfs-discuss] can't destroy snapshot

2010-03-31 Thread Ian Collins
On 04/ 1/10 02:01 PM, Charles Hedrick wrote: So we tried recreating the pool and sending the data again. 1) compression wasn't set on the copy, even though I did send -R, which is supposed to send all properties Was compression explicitly set on the root filesystem of your set? I don't

Re: [zfs-discuss] Simultaneous failure recovery

2010-03-30 Thread Ian Collins
On 03/31/10 10:39 AM, Peter Tribble wrote: I have a pool (on an X4540 running S10U8) in which a disk failed, and the hot spare kicked in. That's perfect. I'm happy. Then a second disk fails. Now, I've replaced the first failed disk, and it's resilvered and I have my hot spare back. But: why

Re: [zfs-discuss] Cannot replace a replacing device

2010-03-28 Thread Ian Collins
On 03/29/10 10:31 AM, Jim wrote: I had a drive fail and replaced it with a new drive. During the resilvering process the new drive had write faults and was taken offline. These faults were caused by a broken SATA cable (drive checked with manufacturer's software and all ok). New cable fixed

Re: [zfs-discuss] *SPAM* Re: zfs send/receive - actual performance

2010-03-27 Thread Ian Collins
On 03/27/10 08:14 PM, Svein Skogen wrote: On 26.03.2010 23:55, Ian Collins wrote: On 03/27/10 09:39 AM, Richard Elling wrote: On Mar 26, 2010, at 2:34 AM, Bruno Sousa wrote: Hi, The jumbo-frames in my case give me a boost of around 2 mb/s, so it's not that much

Re: [zfs-discuss] Pool vdev imbalance - getting worse?

2010-03-27 Thread Ian Collins
On 03/26/10 12:16 AM, Bruno Sousa wrote: Well... I'm pretty much certain that at my job I faced something similar. We had a server with 2 raidz2 groups each with 3 drives, and one drive failed and was replaced by a hot spare. However, the balance of data between the 2 groups of raidz2 start to

Re: [zfs-discuss] What about this status report

2010-03-27 Thread Ian Collins
On 03/28/10 10:02 AM, Harry Putnam wrote: Bob Friesenhahn bfrie...@simple.dallas.tx.us writes: On Sat, 27 Mar 2010, Harry Putnam wrote: What to do with a status report like the one included below? What does it mean to have an unrecoverable error but no data errors? I

Re: [zfs-discuss] ZFS RaidZ to RaidZ2

2010-03-26 Thread Ian Collins
On 03/27/10 11:22 AM, Muhammed Syyid wrote: Hi I have a couple of questions I currently have a 4-disk RaidZ1 setup and want to move to a RaidZ2 4x2TB = RaidZ1 (tank) My current plan is to setup 8x1.5TB in a RAIDZ2 and migrate the data from the tank vdev over. What's the best way to accomplish

Re: [zfs-discuss] ZFS where to go!

2010-03-26 Thread Ian Collins
On 03/27/10 11:32 AM, Svein Skogen wrote: On 26.03.2010 23:25, Marc Nicholas wrote: Richard, My challenge to you is that at least three vendors that I know of built their storage platforms on FreeBSD. One of them sells $4bn/year of product - pretty sure that eclipses all (Open)Solaris-based

Re: [zfs-discuss] ZFS RaidZ to RaidZ2

2010-03-26 Thread Ian Collins
On 03/27/10 11:33 AM, Richard Jahnel wrote: zfs send s...@oldpool | zfs receive newpool In the OP's case, a recursive send is in order. -- Ian.
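
A hedged sketch of such a recursive send (pool and snapshot names here are placeholders, not the OP's):

  # -R includes descendant filesystems, snapshots and properties;
  # -d keeps the dataset layout under the new pool
  zfs snapshot -r oldpool@migrate
  zfs send -R oldpool@migrate | zfs receive -d newpool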

Re: [zfs-discuss] *SPAM* Re: zfs send/receive - actual performance

2010-03-26 Thread Ian Collins
On 03/27/10 09:39 AM, Richard Elling wrote: On Mar 26, 2010, at 2:34 AM, Bruno Sousa wrote: Hi, The jumbo-frames in my case give me a boost of around 2 MB/s, so it's not that much. That is about right. IIRC, the theoretical max is about 4% improvement, for MTU of 8KB. Now i

Re: [zfs-discuss] Pool vdev imbalance - getting worse?

2010-03-25 Thread Ian Collins
On 03/25/10 09:32 PM, Bruno Sousa wrote: On 24-3-2010 22:29, Ian Collins wrote: On 02/28/10 08:09 PM, Ian Collins wrote: I was running zpool iostat on a pool comprising a stripe of raidz2 vdevs that appears to be writing slowly and I notice a considerable imbalance of both free space

Re: [zfs-discuss] Pool vdev imbalance - getting worse?

2010-03-25 Thread Ian Collins
On 03/25/10 11:23 PM, Bruno Sousa wrote: On 25-3-2010 9:46, Ian Collins wrote: On 03/25/10 09:32 PM, Bruno Sousa wrote: On 24-3-2010 22:29, Ian Collins wrote: On 02/28/10 08:09 PM, Ian Collins wrote: I was running zpool iostat on a pool comprising a stripe

Re: [zfs-discuss] zfs send/receive - actual performance

2010-03-25 Thread Ian Collins
On 03/26/10 08:47 AM, Bruno Sousa wrote: Hi all, The more readings i do about ZFS, and experiments the more i like this stack of technologies. Since we all like to see real figures in real environments, I might as well share some of my numbers... The replication has been achieved with the

Re: [zfs-discuss] zfs send/receive - actual performance

2010-03-25 Thread Ian Collins
On 03/26/10 10:00 AM, Bruno Sousa wrote: [Boy top-posting sure mucks up threads!] Hi, Indeed the 3 disks per vdev (raidz2) seems a bad idea...but it's the system i have now. Regarding the performance...let's assume that a bonnie++ benchmark could go to 200 MB/s in. The possibility of

Re: [zfs-discuss] Pool vdev imbalance - getting worse?

2010-03-24 Thread Ian Collins
On 02/28/10 08:09 PM, Ian Collins wrote: I was running zpool iostat on a pool comprising a stripe of raidz2 vdevs that appears to be writing slowly and I notice a considerable imbalance of both free space and write operations. The pool is currently feeding a tape backup while receiving

Re: [zfs-discuss] snapshots as versioning tool

2010-03-22 Thread Ian Collins
On 03/23/10 09:34 AM, Harry Putnam wrote: This may be a bit dimwitted since I don't really understand how snapshots work. I mean the part concerning COW (copy on write) and how it takes so little room. But here I'm not asking about that. It appears to me that the default snapshot setup shares

Re: [zfs-discuss] ZFS send and receive corruption across a WAN link?

2010-03-19 Thread Ian Collins
On 03/20/10 09:28 AM, Richard Jahnel wrote: The way we do this here is: zfs snapshot voln...@snapnow #code to break on error and email not shown. zfs send -i voln...@snapbefore voln...@snapnow | pigz -p4 -1 file #code to break on error and email not shown. scp /dir/file
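
Reassembled as a single hedged sketch (the names, the redirect and the receive side are assumptions, not the poster's exact script):

  zfs snapshot volname@snapnow
  # Compress the incremental stream with 4 pigz threads at the fastest level
  zfs send -i volname@snapbefore volname@snapnow | pigz -p4 -1 > /dir/file.gz
  scp /dir/file.gz remotehost:/dir/
  # On the receiving host
  pigz -dc /dir/file.gz | zfs receive -F volname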

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Ian Collins
On 03/18/10 12:07 PM, Khyron wrote: Ian, When you say you spool to tape for off-site archival, what software do you use? NetVault. -- Ian.

Re: [zfs-discuss] Scrub not completing?

2010-03-17 Thread Ian Collins
On 03/18/10 11:09 AM, Bill Sommerfeld wrote: On 03/17/10 14:03, Ian Collins wrote: I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100% done, but not complete: scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go Don't panic. If zpool iostat still shows active

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Ian Collins
On 03/18/10 03:53 AM, David Dyer-Bennet wrote: Anybody using the in-kernel CIFS is also concerned with the ACLs, and I think that's the big issue. Especially in a paranoid organisation with 100s of ACEs! Also, snapshots. For my purposes, I find snapshots at some level a very important

Re: [zfs-discuss] Is this a sensible spec for an iSCSI storgage box?

2010-03-17 Thread Ian Collins
On 03/18/10 01:03 PM, Matt wrote: Skipping the iSCSI and SAS questions... Later on, I would like to add a second lower spec box to continuously (or near-continuously) mirror the data (using a gig crossover cable, maybe). I have seen lots of ways of mirroring data to other boxes which has

Re: [zfs-discuss] Scrub not completing?

2010-03-17 Thread Ian Collins
On 03/18/10 11:09 AM, Bill Sommerfeld wrote: On 03/17/10 14:03, Ian Collins wrote: I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100% done, but not complete: scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go If blocks that have already been visited are freed

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Ian Collins
On 03/11/10 05:42 AM, Andrew Daugherity wrote: On Tue, 2010-03-09 at 20:47 -0800, mingli wrote: And I update the sharenfs option with rw,ro...@100.198.100.0/24, it works fine, and the NFS client can do the write without error. Thanks. I've found that when using hostnames in the
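
For reference, a hedged sketch of setting such a share (the dataset name is a placeholder; the network is the one quoted above):

  # rw plus root access for one subnet
  zfs set sharenfs='rw=@100.198.100.0/24,root=@100.198.100.0/24' tank/export
  zfs get sharenfs tank/export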

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Ian Collins
On 03/11/10 09:27 AM, Robert Thurlow wrote: Ian Collins wrote: On 03/11/10 05:42 AM, Andrew Daugherity wrote: I've found that when using hostnames in the sharenfs line, I had to use the FQDN; the short hostname did not work, even though both client and server were in the same DNS domain

Re: [zfs-discuss] . . formatted using older on-disk format . .

2010-03-10 Thread Ian Collins
On 03/11/10 03:21 PM, Harry Putnam wrote: Running b133 When you see this line in a `zpool status' report: status: The pool is formatted using an older on-disk format. The pool can still be used, but some features are unavailable. Is it safe and effective to heed the advice given

Re: [zfs-discuss] Can you manually trigger spares?

2010-03-08 Thread Ian Collins
Tim Cook wrote: Is there a way to manually trigger a hot spare to kick in? Mine doesn't appear to be doing so. What happened is I exported a pool to reinstall solaris on this system. When I went to re-import it, one of the drives refused to come back online. So, the pool imported
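
A hedged sketch of kicking a spare in by hand (device names are placeholders):

  # Explicitly replace the ailing disk with the configured spare
  zpool replace tank c1t3d0 c1t9d0
  # Once a permanent replacement has resilvered, detach returns the spare
  zpool detach tank c1t9d0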

Re: [zfs-discuss] ZFS for my home RAID? Or Linux Software RAID?

2010-03-07 Thread Ian Collins
David Dyer-Bennet wrote: For a system where you care about capacity and safety, but not that much about IO throughput (that's my interpretation of what you said you would use it for), with 16 bays, I believe the expert opinion will tell you that two RAIDZ2 groups of 8 disks each is one of

Re: [zfs-discuss] ZFS for my home RAID? Or Linux Software RAID?

2010-03-07 Thread Ian Collins
Slack-Moehrle wrote: Do you have any thoughts on implementation? I think I would just like to put my Home directory on the ZFS pool and just SCP files up as needed. I don't think I need to mount drives on my Mac, etc. SCP seems to suit me. One important point to note is you can only boot off

Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-04 Thread Ian Collins
valrh...@gmail.com wrote: Does this work with dedup? Does what work? Context, Please! (I'm reading this on webmail with limited history..) If you have a deduped pool and send it to a file, will it reflect the smaller size, or will this rehydrate things first? That depends on the

Re: [zfs-discuss] Snapshot recycle freezes system activity

2010-03-04 Thread Ian Collins
Gary Mills wrote: We have an IMAP e-mail server running on a Solaris 10 10/09 system. It uses six ZFS filesystems built on a single zpool with 14 daily snapshots. Every day at 11:56, a cron command destroys the oldest snapshots and creates new ones, both recursively. For about four minutes

Re: [zfs-discuss] recovering data - howto mount rpool to newpool?

2010-03-03 Thread Ian Collins
Erwin Panen wrote: Hi, I'm not very familiar with manipulating zfs. This is what happened: I have an osol 2009.06 system on which I have some files that I need to recover. Due to my ignorance and blind testing, I have managed to get this system to be unbootable... I know, my own fault. So

Re: [zfs-discuss] recovering data - howto mount rpool to newpool?

2010-03-03 Thread Ian Collins
Erwin Panen wrote: Richard, thanks for replying; I seem to have complicated matters: I shutdown the system (past midnight here :-) ) and seeing your reply come in, fired it up again to further test. The system wouldn't come up anymore (dumped in maintenance shell) as it would try to import both

Re: [zfs-discuss] recovering data - howto mount rpool to newpool?

2010-03-03 Thread Ian Collins
Erwin Panen wrote: Ian, thanks for replying. I'll give cfgadm | grep sata a go in a minute. At the mo I've rebooted from 2009.06 livecd. Of course I can't import rpool because it's a newer zfs version :-( Any way to update zfs version on a running livecd? No, if you can get a failsafe

Re: [zfs-discuss] Mismatched replication levels

2010-03-01 Thread Ian Collins
Eduardo Bragatto wrote: On Mar 1, 2010, at 4:04 PM, Tim Cook wrote: The primary concern as I understand it is performance. If they're close in size, it shouldn't be a big deal, but when you've got mismatched rg's it can cause quite the performance troubleshooting nightmare. It's the same

Re: [zfs-discuss] Pool vdev imbalance

2010-02-28 Thread Ian Collins
Andrew Gabriel wrote: Ian Collins wrote: I was running zpool iostat on a pool comprising a stripe of raidz2 vdevs that appears to be writing slowly and I notice a considerable imbalance of both free space and write operations. The pool is currently feeding a tape backup while receiving

[zfs-discuss] Pool vdev imbalance

2010-02-27 Thread Ian Collins
I was running zpool iostat on a pool comprising a stripe of raidz2 vdevs that appears to be writing slowly and I notice a considerable imbalance of both free space and write operations. The pool is currently feeding a tape backup while receiving a large filesystem. Is this imbalance normal?

Re: [zfs-discuss] Who is using ZFS ACL's in production?

2010-02-26 Thread Ian Collins
Paul B. Henson wrote: I've been surveying various forums looking for other places using ZFS ACL's in production to compare notes and see how if at all they've handled some of the issues we've found deploying them. So far, I haven't found anybody using them in any substantial way, let alone

Re: [zfs-discuss] move ZFS fs to a zone

2010-02-07 Thread Ian Collins
dick hoogendijk wrote: # zfs list rpool/www 3.64G 377G 3.64G /var/www rpool/zones 3.00G 377G 24K /zones rpool/zones/anduin 1.94G 377G 24K /zones/anduin rpool/zones/anduin/ROOT 1.94G 377G 21K legacy
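
One way to hand such a dataset to a zone is delegation via zonecfg, sketched here using the zone name from the listing (whether this matches the poster's intent is an assumption):

  zonecfg -z anduin
  zonecfg:anduin> add dataset
  zonecfg:anduin:dataset> set name=rpool/www
  zonecfg:anduin:dataset> end
  zonecfg:anduin> commit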

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-04 Thread Ian Collins
Henu wrote: So do you mean I cannot gather the names and locations of changed/created/removed files just by analyzing a stream of (incremental) zfs_send? That's correct, you can't. Snapshots do not work at the file level. -- Ian.
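
A workable file-level fallback is to compare the snapshots' .zfs directories, sketched here with placeholder names:

  # Recursively list files that differ between two snapshots of tank/fs
  diff -rq /tank/fs/.zfs/snapshot/snap1 /tank/fs/.zfs/snapshot/snap2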

Re: [zfs-discuss] Obtaining zpool volume size from a C coded application.

2010-01-30 Thread Ian Collins
[cross-posting is probably better than multi-posts] Petros Koutoupis wrote: As I was navigating through the source code for the ZFS file system I saw that in zvol.c where the ioctls are defined, if a program sends a DKIOCGGEOM or DKIOCGVTOC, an ENOTSUP (Error Not Supported) is returned. You

[zfs-discuss] Trends in pool configuration

2010-01-23 Thread Ian Collins
My main server doubles as both a development system and web server for my work and a media server for home. When I built it in the early days of ZFS, drive prices were about four times current (500GB were the bleeding edge) and affordable SSDs were a way off so I opted for a stripe of 4 2-way

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-22 Thread Ian Collins
A Darren Dunham wrote: On Wed, Jan 20, 2010 at 08:11:27AM +1300, Ian Collins wrote: True, but I wonder how viable its future is. One of my clients requires 17 LTO4 tapes for a full backup, which cost more and take up more space than the equivalent in removable hard drives. What kind

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-21 Thread Ian Collins
Robert Milkowski wrote: On 20/01/2010 19:20, Ian Collins wrote: Julian Regel wrote: It is actually not that easy. Compare a cost of 2x x4540 with 1TB disks to equivalent solution on LTO. Each x4540 could be configured as: 4x 11 disks in raidz-2 + 2x hot spare + 2x OS disks. The four

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-21 Thread Ian Collins
Julian Regel wrote: Until you try to pick one up and put it in a fire safe! Then you backup to tape from x4540 whatever data you need. In case of enterprise products you save on licensing here as you need a one client license per x4540 but in fact can backup data from many clients which are

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Ian Collins
Allen Eastwood wrote: On Jan 19, 2010, at 22:54, Ian Collins wrote: Allen Eastwood wrote: On Jan 19, 2010, at 18:48, Richard Elling wrote: Many people use send/recv or AVS for disaster recovery on the inexpensive side. Obviously, enterprise backup systems also provide DR

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Ian Collins
Julian Regel wrote: It is actually not that easy. Compare a cost of 2x x4540 with 1TB disks to equivalent solution on LTO. Each x4540 could be configured as: 4x 11 disks in raidz-2 + 2x hot spare + 2x OS disks. The four raidz2 group form a single pool. This would provide well over 30TB of

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Ian Collins
Joerg Schilling wrote: Ian Collins i...@ianshome.com wrote: The correct way to archive ACLs would be to put them into extended POSIX tar attributes as star does. See http://cdrecord.berlios.de/private/man/star/star.4.html for a format documentation or have a look at ftp://ftp.berlios.de

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-19 Thread Ian Collins
Julian Regel wrote: Based on what I've seen in other comments, you might be right. Unfortunately, I don't feel comfortable backing up ZFS filesystems because the tools aren't there to do it (built into the operating system or using Zmanda/Amanda). Commercial backup solutions are available

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-19 Thread Ian Collins
Joerg Schilling wrote: Ian Collins i...@ianshome.com wrote: Julian Regel wrote: Based on what I've seen in other comments, you might be right. Unfortunately, I don't feel comfortable backing up ZFS filesystems because the tools aren't there to do it (built into the operating system

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-19 Thread Ian Collins
Joerg Schilling wrote: Ian Collins i...@ianshome.com wrote: Sun's tar also writes ACLs in a way that is 100% non-portable. Star cannot understand them and probably never will be able to understand this format as it is not well defined for a portable program like star

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-19 Thread Ian Collins
Allen Eastwood wrote: On Jan 19, 2010, at 18:48, Richard Elling wrote: Many people use send/recv or AVS for disaster recovery on the inexpensive side. Obviously, enterprise backup systems also provide DR capabilities. Since ZFS has snapshots that actually work, and you can use send/receive

Re: [zfs-discuss] Backing up a ZFS pool

2010-01-18 Thread Ian Collins
Edward Ned Harvey wrote: Personally, I like to start with a fresh full image once a month, and then do daily incrementals for the rest of the month. This doesn't buy you anything. ZFS isn't like traditional backups. If you never send another full, then eventually the delta from
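
The point in a hedged sketch (pool, dataset and snapshot names are placeholders):

  # One initial full send seeds the backup pool
  zfs send tank/data@mon | zfs receive backup/data
  # Thereafter only the delta between consecutive snapshots travels;
  # received snapshots are real snapshots, so no periodic full is required
  zfs send -i @mon tank/data@tue | zfs receive backup/data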

Re: [zfs-discuss] Snapshot that won't go away.

2010-01-18 Thread Ian Collins
Daniel Carosone wrote: On Mon, Jan 18, 2010 at 05:52:25PM +1300, Ian Collins wrote: Is it the parent snapshot for a clone? I'm almost certain it isn't. I haven't created any clones and none show in zpool history. What about snapshot holds? I don't know if (and doubt

Re: [zfs-discuss] Snapshot that won't go away.

2010-01-17 Thread Ian Collins
Daniel Carosone wrote: On Sun, Jan 17, 2010 at 06:21:45PM +1300, Ian Collins wrote: I have a Solaris 10 update 6 system with a snapshot I can't remove. zfs destroy -f snap reports the device as being busy. fuser doesn't show any process using the filesystem and it isn't shared

[zfs-discuss] Snapshot that won't go away.

2010-01-16 Thread Ian Collins
I have a Solaris 10 update 6 system with a snapshot I can't remove. zfs destroy -f snap reports the device as being busy. fuser doesn't show any process using the filesystem and it isn't shared. I can unmount the filesystem OK. Any clues or suggestions of bigger sticks to hit it with? --
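
A hedged checklist of the bigger sticks raised in this thread (names are placeholders; zfs holds only exists on releases newer than update 6):

  # Is the snapshot held?
  zfs holds tank/fs@snap
  # Is it the origin of a clone?
  zfs list -o name,origin -t filesystem -r tank
  # Is anything still using the mount?
  fuser -c /tank/fs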

Re: [zfs-discuss] rpool mirror on zvol, can't offline and detach

2010-01-11 Thread Ian Collins
Daniel Carosone wrote: However, with the rpool mirror in place, I can't find a way to zpool export black. It complains that the pool is busy, because of the zvol in use. This happens regardless of whether I have set the zvol submirror offline. I expected that, with the subdevice in the

Re: [zfs-discuss] x4500 failed disk, not sure if hot spare took over correctly

2010-01-09 Thread Ian Collins
Paul B. Henson wrote: We just had our first x4500 disk failure (which of course had to happen late Friday night sigh), I've opened a ticket on it but don't expect a response until Monday so was hoping to verify the hot spare took over correctly and we still have redundancy pending device

Re: [zfs-discuss] link in zpool upgrade -v broken

2010-01-08 Thread Ian Collins
Cindy Swearingen wrote: Hi Ian, I see the problem. In your included URL below, you didn't include the /N suffix as included in the zpool upgrade output. That's correct, N is the version number. I see it is fixed now, thanks. -- Ian.

Re: [zfs-discuss] ZFS Dedup Performance

2010-01-08 Thread Ian Collins
James Lee wrote: I haven't seen much discussion on how deduplication affects performance. I've enabled dedup on my 4-disk raidz array and have seen a significant drop in write throughput, from about 100 MB/s to 3 MB/s. I can't imagine such a decrease is normal. What is your data? I've
