Re: [zfs-discuss] any more efficient way to transfer snapshot between two hosts than ssh tunnel?

2012-12-13 Thread Adrian Smith

Re: [zfs-discuss] grrr, How to get rid of mis-touched file named `-c'

2011-11-28 Thread Smith, David W.
You could list by inode, then use find with rm. # ls -i 7223 -O # find . -inum 7223 -exec rm {} \; David On 11/23/11 2:00 PM, Jason King (Gmail) jason.brian.k...@gmail.com wrote: Did you try rm -- filename ? Sent from my iPhone On Nov 23, 2011, at 1:43 PM, Harry Putnam
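A minimal sketch of the approaches mentioned here (the inode number 7223 is only an example; 'rm ./-c' is another well-known option not quoted above):
  # rm -- -c                           (tell rm that no more options follow)
  # rm ./-c                            (a path argument cannot be mistaken for an option)
  # ls -i                              (note the inode number of the stray file)
  # find . -inum 7223 -exec rm {} \;   (remove it by inode)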

Re: [zfs-discuss] How to recover -- LUNs go offline, now permanent errors?

2011-07-18 Thread David Smith
Cindy, I gave your suggestion a try. I did the zpool clear and then did another zpool scrub and all is happy now. Thank you for your help. David

Re: [zfs-discuss] How to recover -- LUNs go offline, now permanent errors?

2011-07-15 Thread David Smith
Cindy, Thanks for the reply. I'll give that a try and then send an update. Thanks, David

[zfs-discuss] How to recover -- LUNs go offline, now permanent errors?

2011-07-13 Thread David Smith
I recently had an issue with my LUNs from our storage unit going offline. This caused the zpool to get numerous errors on the LUNs. The pool is online, and I did a scrub, but one of the raid sets is degraded: raidz2-3 DEGRADED 0 0 0

Re: [zfs-discuss] Zpool metadata corruption from S10U9 to S11 express

2011-06-23 Thread Smith, David W.
On 6/22/11 10:28 PM, Fajar A. Nugraha w...@fajar.net wrote: On Thu, Jun 23, 2011 at 9:28 AM, David W. Smith smith...@llnl.gov wrote: When I tried out Solaris 11, I just exported the pool prior to the install of Solaris 11.  I was lucky in that I had mirrored the boot drive, so after I had

Re: [zfs-discuss] Zpool metadata corruption from S10U9 to S11 express

2011-06-23 Thread David W. Smith
/pci10de,376@e/pci1000,3150@0/sd@3c,0:a' whole_disk=1 create_txg=269718 rewind_txg_ts=1308690257 bad config type 7 for seconds_of_rewind verify_data_errors=0 Please let me know if you need more info... Thanks, David W. Smith

[zfs-discuss] Zpool metadata corruption from S10U9 to S11 express

2011-06-22 Thread David W. Smith
I was recently running Solaris 10 U9 and I decided that I would like to go to Solaris 11 Express so I exported my zpool, hoping that I would just do an import once I had the new system installed with Solaris 11. Now when I try to do an import I'm getting the following: # /home/dws# zpool import

[zfs-discuss] Zpool metadata corruption from S10U9 to S11 express

2011-06-22 Thread David Smith
I was recently running Solaris 10 U9 and I decided that I would like to go to Solaris 11 Express so I exported my zpool, hoping that I would just do an import once I had the new system installed with Solaris 11.
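The export/import procedure itself is just this (pool name 'tank' used as an example):
  # zpool export tank     (on the old install, before reinstalling)
  # zpool import          (on the new install, lists pools available for import)
  # zpool import tank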

Re: [zfs-discuss] Zpool metadata corruption from S10U9 to S11 express

2011-06-22 Thread David Smith
An update: I had mirrored my boot drive when I installed Solaris 10U9 originally, so I went ahead and rebooted the system to this disk instead of my Solaris 11 install. After getting the system up, I imported the zpool, and everything worked normally. So I guess there is some sort of

Re: [zfs-discuss] Zpool metadata corruption from S10U9 to S11 express

2011-06-22 Thread David W. Smith
On Wed, Jun 22, 2011 at 06:32:49PM -0700, Daniel Carosone wrote: On Wed, Jun 22, 2011 at 12:49:27PM -0700, David W. Smith wrote: # /home/dws# zpool import pool: tank id: 13155614069147461689 state: FAULTED status: The pool metadata is corrupted. action: The pool cannot

Re: [zfs-discuss] Question on ZFS iSCSI

2011-06-01 Thread a . smith
Disk /dev/zvol/rdsk/pool/dcpool: 4295GB Sector size (logical/physical): 512B/512B Just to check, did you already try: zpool import -d /dev/zvol/rdsk/pool/ poolname ? thanks Andy.

Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread a . smith
Still I wonder what Gartner means with Oracle monetizing on ZFS... It simply means that Oracle wants to make money from ZFS (as is normal for technology companies with their own technology). The reason this might cause uncertainty for ZFS is that maintaining or helping make the open source

Re: [zfs-discuss] Monitoring disk seeks

2011-05-24 Thread a . smith
Hi, see the seeksize script on this URL: http://prefetch.net/articles/solaris.dtracetopten.html I haven't used it but it looks neat! cheers Andy.

Re: [zfs-discuss] Solaris vs FreeBSD question

2011-05-18 Thread a . smith
Hi, I am using FreeBSD 8.2 in production with ZFS. I have had one issue with it in the past, but I would recommend it and I consider it production ready. That said, if you can wait for FreeBSD 8.3 or 9.0 to come out (a few months away) you will get a better system as these will

Re: [zfs-discuss] ZFS send/recv initial data load

2011-02-16 Thread a . smith
On Feb 16, 2011, at 7:38 AM, whitetr6 at gmail.com wrote: My question is about the initial seed of the data. Is it possible to use a portable drive to copy the initial zfs filesystem(s) to the remote location and then make the subsequent incrementals over the network? If so, what would I
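The usual pattern (a sketch only; pool, dataset and snapshot names are made up) is to seed the remote side from a pool on the portable drive and then send only incrementals over the network:
  # zfs snapshot tank/data@seed
  # zfs send tank/data@seed | zfs receive portable/data    (portable pool attached locally)
  (ship the drive, import 'portable' remotely, receive the seed into the remote pool)
  # zfs snapshot tank/data@day1
  # zfs send -i tank/data@seed tank/data@day1 | ssh remotehost zfs receive backup/data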

Re: [zfs-discuss] Drive i/o anomaly

2011-02-08 Thread a . smith
It is a 4k sector drive, but I thought zfs recognised those drives and didn't need any special configuration...? 4k drives are a big problem for ZFS, much has been posted/written about it. Basically, if the 4k drives report 512 byte blocks, as they almost all do, then ZFS does not detect
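A quick way to see what alignment ZFS chose for an existing pool is to check the ashift value in the pool config (9 means 512-byte sectors, 12 means 4 KB); this is only a sketch and 'tank' is a placeholder:
  # zdb -C tank | grep ashift
          ashift: 9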

Re: [zfs-discuss] zpool scalability and performance

2011-01-13 Thread a . smith
Basically I think yes you need to add all the vdevs you require in the circumstances you describe. You just have to consider what ZFS is able to do with the disks that you give it. If you have 4x mirrors to start with then all writes will be spread across all disks and you will get nice
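For illustration (device names are placeholders), a pool of mirrors is grown one vdev at a time, and new writes are then striped across all top-level vdevs:
  # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
  # zpool add tank mirror c0t4d0 c0t5d0
  # zpool status tank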

Re: [zfs-discuss] ZFS percent busy vs zpool iostat

2011-01-12 Thread a . smith
Quoting Bob Friesenhahn bfrie...@simple.dallas.tx.us: What function is the system performing when it is so busy? The work load of the server is SMTP mail server, with associated spam and virus scanning, and serving maildir email via POP3 and IMAP. Wrong conclusion. I am not sure what

Re: [zfs-discuss] ZFS percent busy vs zpool iostat

2011-01-12 Thread a . smith
Ok, I think I have found the biggest issue. The drives are 4k sector drives, and I wasn't aware of that. My fault, I should have checked this. I've had the disks for ages and they are sub-1TB, so I had the idea that they wouldn't be 4k drives... I will obviously have to address this, either by creating a pool

Re: [zfs-discuss] System crash on zpool attach object_count == usedobjs failed assertion

2010-03-03 Thread Nigel Smith
, there is little indication of any progress being made. Maybe some other 'zfs-discuss' readers would try zdb on their pools, if using a recent dev build, and see if they get a similar problem... Thanks Nigel Smith # mdb core Loading modules: [ libumem.so.1 libc.so.1 libzpool.so.1 libtopo.so.1 libavl.so.1

Re: [zfs-discuss] System crash on zpool attach object_count == usedobjs failed assertion

2010-03-03 Thread Nigel Smith
? And what device driver is the controller using? Thanks Nigel Smith

Re: [zfs-discuss] crashed zpool

2010-03-01 Thread Nigel Smith
Hello Carsten Have you examined the core dump file with mdb ::stack to see if this gives a clue to what happened? Regards Nigel

Re: [zfs-discuss] Help with itadm commands

2010-02-23 Thread Nigel Smith
The iSCSI COMSTAR Port Provider is not installed by default. What release of OpenSolaris are you running? If pre snv_133 then: $ pfexec pkg install SUNWiscsit For snv_133, I think it will be: $ pfexec pkg install network/iscsi/target Regards Nigel Smith
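After the package is installed, the target service still has to be enabled; a sketch, assuming the usual SMF service names (these may differ slightly between builds):
  $ pfexec svcadm enable -r svc:/network/iscsi/target:default
  $ svcs stmf network/iscsi/target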

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Nigel Smith
-for-iscsi-and-nfs-over-1gb-ethernet BTW, what sort of network card are you using, as this can make a difference. Regards Nigel Smith

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Nigel Smith
://www.cuddletech.com/blog/pivot/entry.php?id=820 Regards Nigel Smith

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Nigel Smith
Another thing you could check, which has been reported to cause a problem, is if network or disk drivers share an interrupt with a slow device, like say a USB device. So try: # echo ::interrupts -d | mdb -k ... and look for multiple driver names on an INT#. Regards Nigel Smith

Re: [zfs-discuss] Idiots Guide to Running a NAS with ZFS/OpenSolaris

2010-02-18 Thread Nigel Smith
Hi Robert Have a look at these links: http://delicious.com/nwsmith/opensolaris-nas Regards Nigel Smith

Re: [zfs-discuss] Disk Issues

2010-02-16 Thread Nigel Smith
'. If Native IDE is selected the ICH10 SATA interface should appear as two controllers, the first for ports 0-3, and the second for ports 4 and 5. Regards Nigel Smith

Re: [zfs-discuss] Painfully slow RAIDZ2 as fibre channel COMSTAR export

2010-02-14 Thread Nigel Smith
high %b. And strange that you have c7,c8,c9,c10,c11 which looks like FIVE controllers! Regards Nigel Smith

[zfs-discuss] ..and now ZFS send dedupe

2009-11-09 Thread Nigel Smith
More ZFS goodness putback before close of play for snv_128. http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010768.html http://hg.genunix.org/onnv-gate.hg/rev/216d8396182e Regards Nigel Smith

Re: [zfs-discuss] ZFS + fsck

2009-11-09 Thread Nigel Smith
to raise the priority on his todo list. Thanks Nigel Smith

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-08 Thread Nigel Smith
/src/uts/common/io/sata/adapters/ Regards Nigel Smith

Re: [zfs-discuss] ZFS + fsck

2009-11-05 Thread Nigel Smith
Hi Robert I think you mean snv_128 not 126 :-) 6667683 need a way to rollback to an uberblock from a previous txg http://bugs.opensolaris.org/view_bug.do?bug_id=6667683 http://hg.genunix.org/onnv-gate.hg/rev/8aac17999e4d Regards Nigel Smith
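On builds that include that putback, a damaged pool can be asked to roll back to an earlier txg at import time; a sketch only, assuming such a build ('tank' is a placeholder, and -n reports what a recovery would do without doing it):
  # zpool import -nF tank
  # zpool import -F tank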

Re: [zfs-discuss] ZFS + fsck

2009-11-05 Thread Nigel Smith
Hi Gary I will let 'website-discuss' know about this problem. They normally fix issues like that. Those pages always seemed to just update automatically. I guess it's related to the website transition. Thanks Nigel Smith

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Ross Smith
Ok, thanks everyone then (but still thanks to Victor for the heads up) :-) On Mon, Nov 2, 2009 at 4:03 PM, Victor Latushkin victor.latush...@sun.com wrote: On 02.11.09 18:38, Ross wrote: Double WOHOO!  Thanks Victor! Thanks should go to Tim Haley, Jeff Bonwick and George Wilson ;-)

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Nigel Smith
the dev repository will be updated to snv_128. Then we see if any bugs emerge as we all rush to test it out... Regards Nigel Smith

[zfs-discuss] zfs corrupts grub

2009-10-03 Thread Terry Smith
This is OpenSolaris on a Tecra M5 using a 128GB SSD as the boot device. This device is partitioned into two roughly 60GB partitions. I installed OpenSolaris 2009.06 into the first partition then did an image update to build 124 from the dev repository. All went well so then I created a

[zfs-discuss] How to map solaris disk devices to physical location for ZFS pool setup

2009-09-15 Thread David Smith
Hi, I'm setting up a ZFS environment running on a Sun x4440 + J4400 arrays (similar to 7410 environment) and I was trying to figure out the best way to map a disk drive physical location (tray and slot) to the Solaris device c#t#d#. Do I need to install the CAM software to do this, or is

[zfs-discuss] Read about ZFS backup - Still confused

2009-09-03 Thread Cork Smith
I am just a simple home user. When I was using Linux, I backed up my home directory (which contained all my critical data) using tar. I backed up my Linux partition using partimage. These backups were put on DVDs. That way I could restore (and have) even if the hard drive completely went belly

Re: [zfs-discuss] Read about ZFS backup - Still confused

2009-09-03 Thread Cork Smith
Let me try rephrasing this. I would like the ability to restore so my system mirrors its state at the time when I backed it up, given the old hard drive is now a door stop. Cork

Re: [zfs-discuss] snv_110 - snv_121 produces checksum errors on Raid-Z pool

2009-09-02 Thread Nigel Smith
that anyone using raidz, raidz2, raidz3, should not upgrade to that release? For the people who have already upgraded, presumably the recommendation is that they should revert to a pre-121 BE. Thanks Nigel Smith

Re: [zfs-discuss] ZFS Confusion

2009-08-25 Thread Stephen Nelson-Smith
server by using zpool history oradata That's awesome - thank you very much! S. -- Stephen Nelson-Smith Technical Director Atalanta Systems Ltd www.atalanta-systems.com

Re: [zfs-discuss] ZFS Confusion

2009-08-21 Thread Stephen Nelson-Smith
you. I'm not sure about the remote mount.  It appears to be a local SMB resource mounted as NFS?  I've never seen that before. Ah that's just a Sharity mount - it's a red herring. u0[1-4] will be the same. Thanks very much, S. -- Stephen Nelson-Smith Technical Director Atalanta Systems Ltd

Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Nigel Smith
made, or to actively help with code reviews or testing. Best Regards Nigel Smith

Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Nigel Smith
held back on announcing the work on deduplication, as it just seems to have ramped up frustration, now that it seems no more news is forthcoming. It's easy to be wise after the event and time will tell. Thanks Nigel Smith

Re: [zfs-discuss] Tunable iSCSI timeouts - ZFS over iSCSI fix

2009-07-29 Thread Ross Smith
Yup, somebody pointed that out to me last week and I can't wait :-) On Wed, Jul 29, 2009 at 7:48 PM, Dave dave-...@dubkat.com wrote: Anyone (Ross?) creating ZFS pools over iSCSI connections will want to pay attention to snv_121 which fixes the 3 minute hang after iSCSI disk problems:

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-27 Thread Nigel Smith
David Magda wrote: This is also (theoretically) why a drive purchased from Sun is more expensive than a drive purchased from your neighbourhood computer shop: Sun (and presumably other manufacturers) takes the time and effort to test things to make sure that when a drive says I've

[zfs-discuss] ZFS gzip Death Spiral Revisited

2009-06-13 Thread jeramy smith
I have the following configuration. My storage: 12 LUNs from a Clariion 3x80. Each LUN is a whole 6-disk RAID-6. My host: Sun t5240 with 32 hardware threads and 16 GB of RAM. My zpool: all 12 LUNs from the Clariion in a simple pool. My test data: A 1 GB backup file of a ufsdump from /opt on a
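For context, the gzip in the subject refers to the dataset compression property, e.g. (dataset name is an example; gzip-1 through gzip-9 select the level, plain gzip means gzip-6):
  # zfs set compression=gzip-9 tank/test
  # zfs get compressratio tank/test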

[zfs-discuss] Comstar production-ready?

2009-03-03 Thread Stephen Nelson-Smith
options are there, and what advice/experience can you share? Thanks, S. -- Stephen Nelson-Smith Technical Director Atalanta Systems Ltd www.atalanta-systems.com

Re: [zfs-discuss] ZFS: unreliable for professional usage?

2009-02-14 Thread Ross Smith
/message.jspa?messageID=318009 On Fri, Feb 13, 2009 at 11:09 PM, Richard Elling richard.ell...@gmail.com wrote: Tim wrote: On Fri, Feb 13, 2009 at 4:21 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Fri, 13 Feb 2009, Ross Smith wrote

Re: [zfs-discuss] ZFS: unreliable for professional usage?

2009-02-13 Thread Ross Smith
On Fri, Feb 13, 2009 at 7:41 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Fri, 13 Feb 2009, Ross wrote: Something like that will have people praising ZFS' ability to safeguard their data, and the way it recovers even after system crashes or when hardware has gone wrong. You

Re: [zfs-discuss] ZFS: unreliable for professional usage?

2009-02-13 Thread Ross Smith
On Fri, Feb 13, 2009 at 8:24 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Fri, 13 Feb 2009, Ross Smith wrote: You have to consider that even with improperly working hardware, ZFS has been checksumming data, so if that hardware has been working for any length of time, you *know

Re: [zfs-discuss] ZFS: unreliable for professional usage?

2009-02-13 Thread Ross Smith
be needed. On Fri, Feb 13, 2009 at 8:59 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Fri, 13 Feb 2009, Ross Smith wrote: Thinking about this a bit more, you've given me an idea: Would it be worth ZFS occasionally reading previous uberblocks from the pool, just to check

Re: [zfs-discuss] FW: Supermicro AOC-SAT2-MV8 hang when drive removed

2009-02-12 Thread Ross Smith
Heh, yeah, I've thought the same kind of thing in the past. The problem is that the argument doesn't really work for system admins. As far as I'm concerned, the 7000 series is a new hardware platform, with relatively untested drivers, running a software solution that I know is prone to locking

Re: [zfs-discuss] ZFS: unreliable for professional usage?

2009-02-12 Thread Ross Smith
That would be the ideal, but really I'd settle for just improved error handling and recovery for now. In the longer term, disabling write caching by default for USB or Firewire drives might be nice. On Thu, Feb 12, 2009 at 8:35 PM, Gary Mills mi...@cc.umanitoba.ca wrote: On Thu, Feb 12, 2009

Re: [zfs-discuss] Data loss bug - sidelined??

2009-02-06 Thread Ross Smith
I can check on Monday, but the system will probably panic... which doesn't really help :-) Am I right in thinking failmode=wait is still the default? If so, that should be how it's set as this testing was done on a clean install of snv_106. From what I've seen, I don't think this is a problem

Re: [zfs-discuss] Data loss bug - sidelined??

2009-02-06 Thread Ross Smith
the cache should be writing). On Fri, Feb 6, 2009 at 7:04 PM, Brent Jones br...@servuhome.net wrote: On Fri, Feb 6, 2009 at 10:50 AM, Ross Smith myxi...@googlemail.com wrote: I can check on Monday, but the system will probably panic... which doesn't really help :-) Am I right in thinking

Re: [zfs-discuss] Any way to set casesensitivity=mixed on the main pool?

2009-02-04 Thread Ross Smith
It's not intuitive because when you know that -o sets options, an error message saying that it's not a valid property makes you think that it's not possible to do what you're trying. Documented and intuitive are very different things. I do appreciate that the details are there in the manuals,

Re: [zfs-discuss] SSD drives in Sun Fire X4540 or X4500 for dedicated ZIL device

2009-01-23 Thread Ross Smith
That's my understanding too. One (STEC?) drive as a write cache, basically a write optimised SSD. And cheaper, larger, read optimised SSDs for the read cache. I thought it was an odd strategy until I read into SSDs a little more and realised you really do have to think about your usage cases

[zfs-discuss] Verbose Information from zfs send -v snapshot

2009-01-16 Thread Nick Smith
What 'verbose information' does zfs send -v snapshot report? Also, on Solaris 10u6 I don't get any output at all - is this a bug? Regards, Nick

Re: [zfs-discuss] zfs list improvements?

2009-01-10 Thread Ross Smith
Hmm... that's a tough one. To me, it's a trade off either way, using a -r parameter to specify the depth for zfs list feels more intuitive than adding extra commands to modify the -r behaviour, but I can see your point. But then, using -c or -d means there's an optional parameter for zfs list

[zfs-discuss] zfs destroy is taking a long time...

2009-01-08 Thread David Smith
I was wondering if anyone has any experience with how long a zfs destroy of about 40 TB should take? So far, it has been about an hour... Is there any good way to tell if it is working or if it is hung? Doing a zfs list just hangs. If you do a more specific zfs list, then it is okay... zfs
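One rough way to tell whether a long destroy is still making progress ('tank' is only a placeholder) is to watch whether the pool is still doing I/O and whether the space accounting is changing:
  # zpool iostat tank 5
  # zpool list tank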

Re: [zfs-discuss] zfs destroy is taking a long time...

2009-01-08 Thread David Smith
A few more details: The system is a Sun x4600 running Solaris 10 Update 4.

Re: [zfs-discuss] zfs destroy is taking a long time...

2009-01-08 Thread David W. Smith
On Thu, 2009-01-08 at 13:26 -0500, Brian H. Nelson wrote: David Smith wrote: I was wondering if anyone has any experience with how long a zfs destroy of about 40 TB should take? So far, it has been about an hour... Is there any good way to tell if it is working or if it is hung

Re: [zfs-discuss] Using zfs mirror as a simple backup mechanism for time-slider.

2008-12-22 Thread Ross Smith
On Fri, Dec 19, 2008 at 6:47 PM, Richard Elling richard.ell...@sun.com wrote: Ross wrote: Well, I really like the idea of an automatic service to manage send/receives to backup devices, so if you guys don't mind, I'm going to share some other ideas for features I think would be useful.

Re: [zfs-discuss] Using zfs mirror as a simple backup mechanism for time-slider.

2008-12-18 Thread Ross Smith
Absolutely. The tool shouldn't need to know that the backup disk is accessed via USB, or whatever. The GUI should, however, present devices intelligently, not as cXtYdZ! Yup, and that's easily achieved by simply prompting for a user friendly name as devices are attached. Now you could

Re: [zfs-discuss] Using zfs mirror as a simple backup mechanism for time-slider.

2008-12-18 Thread Ross Smith
On Thu, Dec 18, 2008 at 7:11 PM, Nicolas Williams nicolas.willi...@sun.com wrote: On Thu, Dec 18, 2008 at 07:05:44PM +, Ross Smith wrote: Absolutely. The tool shouldn't need to know that the backup disk is accessed via USB, or whatever. The GUI should, however, present devices

Re: [zfs-discuss] Using zfs mirror as a simple backup mechanism for time-slider.

2008-12-18 Thread Ross Smith
Of course, you'll need some settings for this so it's not annoying if people don't want to use it. A simple tick box on that pop up dialog allowing people to say don't ask me again would probably do. I would like something better than that. Don't ask me again sucks when much, much later

Re: [zfs-discuss] Using zfs mirror as a simple backup mechanism for time-slider.

2008-12-18 Thread Ross Smith
I was thinking more something like: - find all disk devices and slices that have ZFS pools on them - show users the devices and pool names (and UUIDs and device paths in case of conflicts).. I was thinking that device pool names are too variable, you need to be reading serial numbers

Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-15 Thread Ross Smith
Forgive me for not understanding the details, but couldn't you also work backwards through the blocks with ZFS and attempt to recreate the uberblock? So if you lost the uberblock, could you (memory and time allowing) start scanning the disk, looking for orphan blocks that aren't referenced

Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-15 Thread Ross Smith
I'm not sure I follow how that can happen, I thought ZFS writes were designed to be atomic? They either commit properly on disk or they don't? On Mon, Dec 15, 2008 at 6:34 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Mon, 15 Dec 2008, Ross wrote: My concern is that ZFS has all

Re: [zfs-discuss] cannot mount ZFS volume

2008-12-11 Thread John Smith
Ahhh...I missed the difference between a volume and a FS. That was it...thanks.
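For anyone hitting the same confusion: a ZFS volume (zvol) is a block device intended for things like iSCSI LUNs and is not mountable, while a filesystem dataset is; a minimal sketch with example names:
  # zfs create -V 10g tank/lun0    (volume: appears under /dev/zvol/, no mountpoint)
  # zfs create tank/data           (filesystem: mounted at /tank/data by default)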

[zfs-discuss] cannot mount ZFS volume

2008-12-10 Thread John Smith
When I create a volume I am unable to mount it locally. I'm pretty sure it has something to do with the other volumes in the same ZFS pool being shared out as iSCSI LUNs. For some reason ZFS thinks the base volume is iSCSI. Is there a flag that I am missing? Thanks in advance for the help.

Re: [zfs-discuss] zfs not yet suitable for HA applications?

2008-12-05 Thread Ross Smith
Hi Dan, replying in line: On Fri, Dec 5, 2008 at 9:19 PM, David Anderson [EMAIL PROTECTED] wrote: Trying to keep this in the spotlight. Apologies for the lengthy post. Heh, don't apologise, you should see some of my posts... o_0 I'd really like to see features as described by Ross in his

Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-12-03 Thread Ross Smith
Yeah, thanks Maurice, I just saw that one this afternoon. I guess you can't reboot with iscsi full stop... o_0 And I've seen the iscsi bug before (I was just too lazy to look it up lol), I've been complaining about that since February. In fact it's been a bad week for iscsi here, I've managed

Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-12-02 Thread Ross Smith
Hey folks, I've just followed up on this, testing iSCSI with a raided pool, and it still appears to be struggling when a device goes offline. I don't see how this could work except for mirrored pools. Would that carry enough market to be worthwhile? -- richard I have to admit, I've not

Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-12-02 Thread Ross Smith
Hi Richard, Thanks, I'll give that a try. I think I just had a kernel dump while trying to boot this system back up though, I don't think it likes it if the iscsi targets aren't available during boot. Again, that rings a bell, so I'll go see if that's another known bug. Changing that setting

Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-11-27 Thread Ross Smith
On Fri, Nov 28, 2008 at 5:05 AM, Richard Elling [EMAIL PROTECTED] wrote: Ross wrote: Well, you're not alone in wanting to use ZFS and iSCSI like that, and in fact my change request suggested that this is exactly one of the things that could be addressed: The idea is really a two stage RFE,

Re: [zfs-discuss] ZFS, Smashing Baby a fake???

2008-11-25 Thread Ross Smith
Hey Jeff, Good to hear there's work going on to address this. What did you guys think to my idea of ZFS supporting a waiting for a response status for disks as an interim solution that allows the pool to continue operation while it's waiting for FMA or the driver to fault the drive? I do

Re: [zfs-discuss] ZFS, Smashing Baby a fake???

2008-11-25 Thread Ross Smith
PS. I think this also gives you a chance at making the whole problem much simpler. Instead of the hard question of 'is this faulty?', you're just trying to ask 'is it working right now?'. In fact, I'm now wondering if the 'waiting for a response' flag wouldn't be better as 'possibly faulty'. That way

Re: [zfs-discuss] ZFS, Smashing Baby a fake???

2008-11-25 Thread Ross Smith
No, I count that as doesn't return data ok, but my post wasn't very clear at all on that. Even for a write, the disk will return something to indicate that the action has completed, so that can also be covered by just those two scenarios, and right now ZFS can lock the whole pool up if it's

Re: [zfs-discuss] ZFS, Smashing Baby a fake???

2008-11-25 Thread Ross Smith
Hmm, true. The idea doesn't work so well if you have a lot of writes, so there needs to be some thought as to how you handle that. Just thinking aloud, could the missing writes be written to the log file on the rest of the pool? Or temporarily stored somewhere else in the pool? Would it be an

Re: [zfs-discuss] ZFS, Smashing Baby a fake???

2008-11-25 Thread Ross Smith
The shortcomings of timeouts have been discussed on this list before. How do you tell the difference between a drive that is dead and a path that is just highly loaded? A path that is dead is either returning bad data, or isn't returning anything. A highly loaded path is by definition reading

Re: [zfs-discuss] ZFS, Smashing Baby a fake???

2008-11-25 Thread Ross Smith
of that. On Tue, Nov 25, 2008 at 3:57 PM, Bob Friesenhahn [EMAIL PROTECTED] wrote: On Tue, 25 Nov 2008, Ross Smith wrote: Good to hear there's work going on to address this. What did you guys think to my idea of ZFS supporting a waiting for a response status for disks as an interim solution

Re: [zfs-discuss] Help recovering zfs filesystem

2008-11-07 Thread Nigel Smith
/zfs-discuss/2008-May/047270.html Regards Nigel Smith

Re: [zfs-discuss] questions on zfs send,receive,backups

2008-11-03 Thread Ross Smith
Snapshots are not replacements for traditional backup/restore features. If you need the latter, use what is currently available on the market. -- richard I'd actually say snapshots do a better job in some circumstances. Certainly they're being used that way by the desktop team:

Re: [zfs-discuss] questions on zfs send,receive,backups

2008-11-03 Thread Ross Smith
If the file still existed, would this be a case of redirecting the file's top level block (dnode?) to the one from the snapshot? If the file had been deleted, could you just copy that one block? Is it that simple, or is there a level of interaction between files and snapshots that I've

Re: [zfs-discuss] questions on zfs send,receive,backups

2008-11-03 Thread Ross Smith
Hi Darren, That's storing a dump of a snapshot on external media, but files within it are not directly accessible. The work Tim et al. are doing is actually putting a live ZFS filesystem on external media and sending snapshots to it. A live ZFS filesystem is far more useful (and reliable) than

Re: [zfs-discuss] diagnosing read performance problem

2008-10-30 Thread Nigel Smith
any good while that is happening. I think you need to try a different network card in the server. Regards Nigel Smith

Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-29 Thread Nigel Smith
is closed source :-( Regards Nigel Smith

Re: [zfs-discuss] diagnosing read performance problem

2008-10-29 Thread Nigel Smith
be interesting to do two separate captures - one on the client and one on the server, at the same time, as this would show if the switch was causing disruption. Try to have the clocks on the client and server synchronised as closely as possible. Thanks Nigel Smith

Re: [zfs-discuss] diagnosing read performance problem

2008-10-29 Thread Nigel Smith
' for the network card, just in case it turns out to be a driver bug. Regards Nigel Smith

Re: [zfs-discuss] diagnosing read performance problem

2008-10-28 Thread Nigel Smith
, that's my thoughts and conclusion for now. Maybe you could get some more snoop captures with other clients, and with a different switch, and do a similar analysis. Regards Nigel Smith

Re: [zfs-discuss] My 500-gig ZFS is gone: insufficient replicas, corrupted data

2008-10-27 Thread Nigel Smith
. If you're using Solaris, maybe try 'prtvtoc'. http://docs.sun.com/app/docs/doc/819-2240/prtvtoc-1m?a=view (Unless someone knows a better way?) Thanks Nigel Smith # prtvtoc /dev/rdsk/c1t1d0 * /dev/rdsk/c1t1d0 partition map * * Dimensions: * 512 bytes/sector * 1465149168 sectors * 1465149101 accessible

Re: [zfs-discuss] My 500-gig ZFS is gone: insufficient replicas, corrupted data

2008-10-27 Thread Nigel Smith
/2007/660/onepager/ http://bugs.opensolaris.org/view_bug.do?bug_id=5044205 Regards Nigel Smith

Re: [zfs-discuss] zpool import problem

2008-10-27 Thread Nigel Smith
a 'zpool status') Thanks Nigel Smith

Re: [zfs-discuss] zpool import: all devices online but: insufficient replicas

2008-10-27 Thread Nigel Smith
'status' of your zpool on Server2? (You have not provided a 'zpool status') Thanks Nigel Smith

Re: [zfs-discuss] My 500-gig ZFS is gone: insufficient replicas, corrupted data

2008-10-27 Thread Nigel Smith
'smartctl' (fully) working with PATA and SATA drives on x86 Solaris. I've done a quick search on PSARC 2007/660 and it was closed approved fast-track 11/28/2007. I did a quick search, but I could not find any code that had been committed to 'onnv-gate' that references this case. Regards Nigel Smith
